00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2408
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3673
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.019 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.023 The recommended git tool is: git
00:00:00.024 using credential 00000000-0000-0000-0000-000000000002
00:00:00.026 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.040 Fetching changes from the remote Git repository
00:00:00.043 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.060 Using shallow fetch with depth 1
00:00:00.060 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.060 > git --version # timeout=10
00:00:00.082 > git --version # 'git version 2.39.2'
00:00:00.082 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.105 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.105 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.352 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.365 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.377 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.377 > git config core.sparsecheckout # timeout=10
00:00:02.390 > git read-tree -mu HEAD # timeout=10
00:00:02.404 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.427 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.427 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.736 [Pipeline] Start of Pipeline
00:00:02.749 [Pipeline] library
00:00:02.751 Loading library shm_lib@master
00:00:02.751 Library shm_lib@master is cached. Copying from home.
00:00:02.770 [Pipeline] node
00:00:02.793 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.794 [Pipeline] {
00:00:02.801 [Pipeline] catchError
00:00:02.802 [Pipeline] {
00:00:02.814 [Pipeline] wrap
00:00:02.823 [Pipeline] {
00:00:02.834 [Pipeline] stage
00:00:02.836 [Pipeline] { (Prologue)
00:00:02.857 [Pipeline] echo
00:00:02.859 Node: VM-host-WFP7
00:00:02.867 [Pipeline] cleanWs
00:00:02.878 [WS-CLEANUP] Deleting project workspace...
00:00:02.878 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.922 [WS-CLEANUP] done
00:00:03.133 [Pipeline] setCustomBuildProperty
00:00:03.197 [Pipeline] httpRequest
00:00:03.867 [Pipeline] echo
00:00:03.869 Sorcerer 10.211.164.101 is alive
00:00:03.879 [Pipeline] retry
00:00:03.881 [Pipeline] {
00:00:03.896 [Pipeline] httpRequest
00:00:03.901 HttpMethod: GET
00:00:03.901 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.902 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.903 Response Code: HTTP/1.1 200 OK
00:00:03.904 Success: Status code 200 is in the accepted range: 200,404
00:00:03.904 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.062 [Pipeline] }
00:00:04.078 [Pipeline] // retry
00:00:04.086 [Pipeline] sh
00:00:04.374 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:04.392 [Pipeline] httpRequest
00:00:04.772 [Pipeline] echo
00:00:04.773 Sorcerer 10.211.164.101 is alive
00:00:04.783 [Pipeline] retry
00:00:04.785 [Pipeline] {
00:00:04.798 [Pipeline] httpRequest
00:00:04.803 HttpMethod: GET
00:00:04.804 URL: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:04.805 Sending request to url: http://10.211.164.101/packages/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:04.807 Response Code: HTTP/1.1 200 OK
00:00:04.808 Success: Status code 200 is in the accepted range: 200,404
00:00:04.808 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:17.128 [Pipeline] }
00:00:17.147 [Pipeline] // retry
00:00:17.157 [Pipeline] sh
00:00:17.443 + tar --no-same-owner -xf spdk_35cd3e84d4a92eacc8c9de6c2cd81450ef5bcc54.tar.gz
00:00:19.998 [Pipeline] sh
00:00:20.288 + git -C spdk log --oneline -n5
00:00:20.288 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:20.288 01a2c4855 bdev/passthru: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:00:20.288 9094b9600 bdev: Assert to check if I/O pass dif_check_flags not enabled by bdev
00:00:20.288 2e10c84c8 nvmf: Expose DIF type of namespace to host again
00:00:20.288 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write
00:00:20.309 [Pipeline] withCredentials
00:00:20.320 > git --version # timeout=10
00:00:20.334 > git --version # 'git version 2.39.2'
00:00:20.352 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:20.355 [Pipeline] {
00:00:20.364 [Pipeline] retry
00:00:20.366 [Pipeline] {
00:00:20.383 [Pipeline] sh
00:00:20.671 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:20.945 [Pipeline] }
00:00:20.966 [Pipeline] // retry
00:00:20.972 [Pipeline] }
00:00:20.988 [Pipeline] // withCredentials
00:00:20.998 [Pipeline] httpRequest
00:00:22.708 [Pipeline] echo
00:00:22.710 Sorcerer 10.211.164.101 is alive
00:00:22.720 [Pipeline] retry
00:00:22.723 [Pipeline] {
00:00:22.737 [Pipeline] httpRequest
00:00:22.743 HttpMethod: GET
00:00:22.743 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:22.744 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:22.759 Response Code: HTTP/1.1 200 OK
00:00:22.760 Success: Status code 200 is in the accepted range: 200,404
00:00:22.761 Saving response body to /var/jenkins/workspace/raid-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:43.899 [Pipeline] }
00:00:43.914 [Pipeline] // retry
00:00:43.922 [Pipeline] sh
00:00:44.207 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:45.607 [Pipeline] sh
00:00:45.892 + git -C dpdk log --oneline -n5
00:00:45.892 caf0f5d395 version: 22.11.4
00:00:45.892 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:00:45.892 dc9c799c7d vhost: fix missing spinlock unlock
00:00:45.892 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:00:45.892 6ef77f2a5e net/gve: fix RX buffer size alignment
00:00:45.911 [Pipeline] writeFile
00:00:45.927 [Pipeline] sh
00:00:46.213 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:46.226 [Pipeline] sh
00:00:46.582 + cat autorun-spdk.conf
00:00:46.582 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.582 SPDK_RUN_ASAN=1
00:00:46.582 SPDK_RUN_UBSAN=1
00:00:46.582 SPDK_TEST_RAID=1
00:00:46.582 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:46.582 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:46.582 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:46.605 RUN_NIGHTLY=1
00:00:46.607 [Pipeline] }
00:00:46.621 [Pipeline] // stage
00:00:46.635 [Pipeline] stage
00:00:46.636 [Pipeline] { (Run VM)
00:00:46.647 [Pipeline] sh
00:00:46.937 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:46.937 + echo 'Start stage prepare_nvme.sh'
00:00:46.937 Start stage prepare_nvme.sh
00:00:46.937 + [[ -n 7 ]]
00:00:46.937 + disk_prefix=ex7
00:00:46.937 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:46.937 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:46.937 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:46.937 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:46.937 ++ SPDK_RUN_ASAN=1
00:00:46.937 ++ SPDK_RUN_UBSAN=1
00:00:46.937 ++ SPDK_TEST_RAID=1
00:00:46.937 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:00:46.937 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:00:46.937 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:46.937 ++ RUN_NIGHTLY=1
00:00:46.937 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:46.937 + nvme_files=()
00:00:46.937 + declare -A nvme_files
00:00:46.937 + backend_dir=/var/lib/libvirt/images/backends
00:00:46.937 + nvme_files['nvme.img']=5G
00:00:46.937 + nvme_files['nvme-cmb.img']=5G
00:00:46.937 + nvme_files['nvme-multi0.img']=4G
00:00:46.937 + nvme_files['nvme-multi1.img']=4G
00:00:46.937 + nvme_files['nvme-multi2.img']=4G
00:00:46.937 + nvme_files['nvme-openstack.img']=8G
00:00:46.937 + nvme_files['nvme-zns.img']=5G
00:00:46.937 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:46.937 + (( SPDK_TEST_FTL == 1 ))
00:00:46.937 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:46.937 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi2.img -s 4G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-cmb.img -s 5G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-openstack.img -s 8G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-zns.img -s 5G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi1.img -s 4G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme-multi0.img -s 4G
00:00:46.937 Formatting '/var/lib/libvirt/images/backends/ex7-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:46.937 + for nvme in "${!nvme_files[@]}"
00:00:46.937 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex7-nvme.img -s 5G
00:00:47.237 Formatting '/var/lib/libvirt/images/backends/ex7-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:47.237 ++ sudo grep -rl ex7-nvme.img /etc/libvirt/qemu
00:00:47.237 + echo 'End stage prepare_nvme.sh'
00:00:47.237 End stage prepare_nvme.sh
00:00:47.251 [Pipeline] sh
00:00:47.535 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:47.535 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex7-nvme.img -b /var/lib/libvirt/images/backends/ex7-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img -H -a -v -f fedora39
00:00:47.535 
00:00:47.535 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:47.535 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:47.535 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:47.535 HELP=0
00:00:47.535 DRY_RUN=0
00:00:47.535 NVME_FILE=/var/lib/libvirt/images/backends/ex7-nvme.img,/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,
00:00:47.535 NVME_DISKS_TYPE=nvme,nvme,
00:00:47.535 NVME_AUTO_CREATE=0
00:00:47.535 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex7-nvme-multi1.img:/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,
00:00:47.535 NVME_CMB=,,
00:00:47.535 NVME_PMR=,,
00:00:47.535 NVME_ZNS=,,
00:00:47.535 NVME_MS=,,
00:00:47.535 NVME_FDP=,,
00:00:47.535 SPDK_VAGRANT_DISTRO=fedora39
00:00:47.535 SPDK_VAGRANT_VMCPU=10
00:00:47.535 SPDK_VAGRANT_VMRAM=12288
00:00:47.535 SPDK_VAGRANT_PROVIDER=libvirt
00:00:47.535 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:47.535 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:47.535 SPDK_OPENSTACK_NETWORK=0
00:00:47.535 VAGRANT_PACKAGE_BOX=0
00:00:47.535 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:47.535 FORCE_DISTRO=true
00:00:47.535 VAGRANT_BOX_VERSION=
00:00:47.535 EXTRA_VAGRANTFILES=
00:00:47.535 NIC_MODEL=virtio
00:00:47.535 
00:00:47.535 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:47.535 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:49.446 Bringing machine 'default' up with 'libvirt' provider...
00:00:50.018 ==> default: Creating image (snapshot of base box volume).
00:00:50.018 ==> default: Creating domain with the following settings...
00:00:50.018 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732743192_7687f96ee8d34045e997
00:00:50.018 ==> default: -- Domain type: kvm
00:00:50.018 ==> default: -- Cpus: 10
00:00:50.018 ==> default: -- Feature: acpi
00:00:50.018 ==> default: -- Feature: apic
00:00:50.018 ==> default: -- Feature: pae
00:00:50.018 ==> default: -- Memory: 12288M
00:00:50.018 ==> default: -- Memory Backing: hugepages:
00:00:50.018 ==> default: -- Management MAC:
00:00:50.018 ==> default: -- Loader:
00:00:50.018 ==> default: -- Nvram:
00:00:50.018 ==> default: -- Base box: spdk/fedora39
00:00:50.018 ==> default: -- Storage pool: default
00:00:50.018 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732743192_7687f96ee8d34045e997.img (20G)
00:00:50.018 ==> default: -- Volume Cache: default
00:00:50.018 ==> default: -- Kernel:
00:00:50.018 ==> default: -- Initrd:
00:00:50.018 ==> default: -- Graphics Type: vnc
00:00:50.018 ==> default: -- Graphics Port: -1
00:00:50.018 ==> default: -- Graphics IP: 127.0.0.1
00:00:50.018 ==> default: -- Graphics Password: Not defined
00:00:50.018 ==> default: -- Video Type: cirrus
00:00:50.018 ==> default: -- Video VRAM: 9216
00:00:50.018 ==> default: -- Sound Type:
00:00:50.018 ==> default: -- Keymap: en-us
00:00:50.018 ==> default: -- TPM Path:
00:00:50.018 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:50.018 ==> default: -- Command line args:
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:50.018 ==> default: -> value=-drive,
00:00:50.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme.img,if=none,id=nvme-0-drive0,
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:50.018 ==> default: -> value=-drive,
00:00:50.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:50.018 ==> default: -> value=-drive,
00:00:50.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:50.018 ==> default: -> value=-drive,
00:00:50.018 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex7-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:50.018 ==> default: -> value=-device,
00:00:50.018 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:50.280 ==> default: Creating shared folders metadata...
00:00:50.280 ==> default: Starting domain.
00:00:52.191 ==> default: Waiting for domain to get an IP address...
00:01:10.301 ==> default: Waiting for SSH to become available...
00:01:10.301 ==> default: Configuring and enabling network interfaces...
00:01:15.581 default: SSH address: 192.168.121.25:22
00:01:15.581 default: SSH username: vagrant
00:01:15.581 default: SSH auth method: private key
00:01:18.128 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:26.258 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:31.532 ==> default: Mounting SSHFS shared folder...
00:01:33.439 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:33.439 ==> default: Checking Mount..
00:01:35.349 ==> default: Folder Successfully Mounted!
00:01:35.349 ==> default: Running provisioner: file...
00:01:36.286 default: ~/.gitconfig => .gitconfig
00:01:36.617 
00:01:36.617 SUCCESS!
00:01:36.617 
00:01:36.617 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:36.617 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:36.617 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:36.617 
00:01:36.643 [Pipeline] }
00:01:36.657 [Pipeline] // stage
00:01:36.666 [Pipeline] dir
00:01:36.666 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:36.668 [Pipeline] {
00:01:36.681 [Pipeline] catchError
00:01:36.682 [Pipeline] {
00:01:36.696 [Pipeline] sh
00:01:36.979 + vagrant ssh-config --host vagrant
00:01:36.979 + sed -ne /^Host/,$p
00:01:36.979 + tee ssh_conf
00:01:39.518 Host vagrant
00:01:39.518 HostName 192.168.121.25
00:01:39.518 User vagrant
00:01:39.518 Port 22
00:01:39.518 UserKnownHostsFile /dev/null
00:01:39.518 StrictHostKeyChecking no
00:01:39.518 PasswordAuthentication no
00:01:39.518 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:39.518 IdentitiesOnly yes
00:01:39.518 LogLevel FATAL
00:01:39.518 ForwardAgent yes
00:01:39.518 ForwardX11 yes
00:01:39.518 
00:01:39.533 [Pipeline] withEnv
00:01:39.536 [Pipeline] {
00:01:39.549 [Pipeline] sh
00:01:39.832 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:39.832 source /etc/os-release
00:01:39.832 [[ -e /image.version ]] && img=$(< /image.version)
00:01:39.832 # Minimal, systemd-like check.
00:01:39.832 if [[ -e /.dockerenv ]]; then
00:01:39.832 # Clear garbage from the node's name:
00:01:39.832 # agt-er_autotest_547-896 -> autotest_547-896
00:01:39.832 # $HOSTNAME is the actual container id
00:01:39.832 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:39.832 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:39.832 # We can assume this is a mount from a host where container is running,
00:01:39.832 # so fetch its hostname to easily identify the target swarm worker.
00:01:39.832 container="$(< /etc/hostname) ($agent)"
00:01:39.832 else
00:01:39.832 # Fallback
00:01:39.832 container=$agent
00:01:39.832 fi
00:01:39.832 fi
00:01:39.832 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:39.832 
00:01:40.103 [Pipeline] }
00:01:40.118 [Pipeline] // withEnv
00:01:40.126 [Pipeline] setCustomBuildProperty
00:01:40.140 [Pipeline] stage
00:01:40.142 [Pipeline] { (Tests)
00:01:40.158 [Pipeline] sh
00:01:40.442 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:40.716 [Pipeline] sh
00:01:40.999 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:41.273 [Pipeline] timeout
00:01:41.274 Timeout set to expire in 1 hr 30 min
00:01:41.275 [Pipeline] {
00:01:41.289 [Pipeline] sh
00:01:41.575 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:42.146 HEAD is now at 35cd3e84d bdev/part: Pass through dif_check_flags via dif_check_flags_exclude_mask
00:01:42.158 [Pipeline] sh
00:01:42.442 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:42.718 [Pipeline] sh
00:01:43.003 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:43.281 [Pipeline] sh
00:01:43.571 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:43.831 ++ readlink -f spdk_repo
00:01:43.831 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:43.831 + [[ -n /home/vagrant/spdk_repo ]]
00:01:43.831 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:43.831 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:43.831 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:43.831 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:43.831 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:43.831 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:43.831 + cd /home/vagrant/spdk_repo
00:01:43.831 + source /etc/os-release
00:01:43.831 ++ NAME='Fedora Linux'
00:01:43.831 ++ VERSION='39 (Cloud Edition)'
00:01:43.831 ++ ID=fedora
00:01:43.831 ++ VERSION_ID=39
00:01:43.831 ++ VERSION_CODENAME=
00:01:43.831 ++ PLATFORM_ID=platform:f39
00:01:43.831 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:43.831 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:43.831 ++ LOGO=fedora-logo-icon
00:01:43.831 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:43.831 ++ HOME_URL=https://fedoraproject.org/
00:01:43.831 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:43.831 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:43.831 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:43.831 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:43.831 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:43.831 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:43.831 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:43.831 ++ SUPPORT_END=2024-11-12
00:01:43.831 ++ VARIANT='Cloud Edition'
00:01:43.831 ++ VARIANT_ID=cloud
00:01:43.831 + uname -a
00:01:43.831 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:43.831 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:44.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:44.401 Hugepages
00:01:44.401 node hugesize free / total
00:01:44.401 node0 1048576kB 0 / 0
00:01:44.401 node0 2048kB 0 / 0
00:01:44.401 
00:01:44.401 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:44.401 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:44.401 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:44.401 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:44.401 + rm -f /tmp/spdk-ld-path
00:01:44.401 + source autorun-spdk.conf
00:01:44.401 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.401 ++ SPDK_RUN_ASAN=1
00:01:44.401 ++ SPDK_RUN_UBSAN=1
00:01:44.401 ++ SPDK_TEST_RAID=1
00:01:44.401 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:44.401 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:44.401 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:44.401 ++ RUN_NIGHTLY=1
00:01:44.401 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:44.401 + [[ -n '' ]]
00:01:44.401 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:44.662 + for M in /var/spdk/build-*-manifest.txt
00:01:44.662 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:44.662 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:44.662 + for M in /var/spdk/build-*-manifest.txt
00:01:44.662 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:44.662 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:44.662 + for M in /var/spdk/build-*-manifest.txt
00:01:44.662 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:44.662 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:44.662 ++ uname
00:01:44.662 + [[ Linux == \L\i\n\u\x ]]
00:01:44.662 + sudo dmesg -T
00:01:44.662 + sudo dmesg --clear
00:01:44.662 + dmesg_pid=6155
00:01:44.662 + [[ Fedora Linux == FreeBSD ]]
00:01:44.662 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:44.662 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:44.662 + sudo dmesg -Tw
00:01:44.662 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:44.662 + [[ -x /usr/src/fio-static/fio ]]
00:01:44.662 + export FIO_BIN=/usr/src/fio-static/fio
00:01:44.662 + FIO_BIN=/usr/src/fio-static/fio
00:01:44.662 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:44.662 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:44.662 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:44.662 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:44.662 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:44.662 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:44.662 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:44.662 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:44.662 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:44.923 21:34:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:44.923 21:34:07 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:44.923 21:34:07 -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=1
00:01:44.923 21:34:07 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:44.923 21:34:07 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:44.923 21:34:07 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:44.923 21:34:07 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:44.923 21:34:07 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:44.923 21:34:07 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:44.923 21:34:07 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:44.923 21:34:07 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:44.923 21:34:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:44.923 21:34:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:44.923 21:34:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:44.923 21:34:07 -- paths/export.sh@5 -- $ export PATH
00:01:44.923 21:34:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:44.923 21:34:07 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:44.923 21:34:07 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:44.923 21:34:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732743247.XXXXXX
00:01:44.923 21:34:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732743247.xWRvF5
00:01:44.923 21:34:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:44.923 21:34:07 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']'
00:01:44.923 21:34:07 -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:44.923 21:34:07 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:44.923 21:34:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:44.923 21:34:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:44.923 21:34:07 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:44.923 21:34:07 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:44.923 21:34:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.923 21:34:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:44.923 21:34:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:44.923 21:34:07 -- pm/common@17 -- $ local monitor
00:01:44.923 21:34:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:44.923 21:34:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:44.923 21:34:07 -- pm/common@25 -- $ sleep 1
00:01:44.923 21:34:07 -- pm/common@21 -- $ date +%s
00:01:44.923 21:34:07 -- pm/common@21 -- $ date +%s
00:01:44.923 21:34:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732743247
00:01:44.923 21:34:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732743247
00:01:44.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732743247_collect-cpu-load.pm.log
00:01:44.923 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732743247_collect-vmstat.pm.log
00:01:45.864 21:34:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:45.864 21:34:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:45.864 21:34:08 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:45.864 21:34:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:45.864 21:34:08 -- spdk/autobuild.sh@16 -- $ date -u
00:01:45.864 Wed Nov 27 09:34:08 PM UTC 2024
00:01:45.864 21:34:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:45.864 v25.01-pre-276-g35cd3e84d
00:01:45.864 21:34:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:45.864 21:34:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:45.864 21:34:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:45.864 21:34:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:45.864 21:34:08 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.124 ************************************ 00:01:46.124 START TEST asan 00:01:46.124 ************************************ 00:01:46.124 using asan 00:01:46.124 21:34:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:46.125 00:01:46.125 real 0m0.001s 00:01:46.125 user 0m0.000s 00:01:46.125 sys 0m0.000s 00:01:46.125 21:34:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:46.125 21:34:08 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:46.125 ************************************ 00:01:46.125 END TEST asan 00:01:46.125 ************************************ 00:01:46.125 21:34:09 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:46.125 21:34:09 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:46.125 21:34:09 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:46.125 21:34:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:46.125 21:34:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.125 ************************************ 00:01:46.125 START TEST ubsan 00:01:46.125 ************************************ 00:01:46.125 using ubsan 00:01:46.125 21:34:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:46.125 00:01:46.125 real 0m0.000s 00:01:46.125 user 0m0.000s 00:01:46.125 sys 0m0.000s 00:01:46.125 21:34:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:46.125 21:34:09 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:46.125 ************************************ 00:01:46.125 END TEST ubsan 00:01:46.125 ************************************ 00:01:46.125 21:34:09 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:46.125 21:34:09 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:46.125 21:34:09 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:46.125 
21:34:09 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:01:46.125 21:34:09 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:46.125 21:34:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.125 ************************************ 00:01:46.125 START TEST build_native_dpdk 00:01:46.125 ************************************ 00:01:46.125 21:34:09 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@71 -- 
$ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:46.125 caf0f5d395 version: 22.11.4
00:01:46.125 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:46.125 dc9c799c7d vhost: fix missing spinlock unlock
00:01:46.125 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:46.125 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore"
"power/kvm_vm") 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:46.125 21:34:09 
build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:01:46.125 patching file config/rte_config.h 00:01:46.125 Hunk #1 succeeded at 60 (offset 1 line). 
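The `cmp_versions` trace above splits both version strings on `.`, `-` and `:` (the `IFS=.-:` lines) and compares the fields numerically from left to right. A minimal bash sketch of that comparison, assuming strict less-than semantics; it is not the actual helper from scripts/common.sh:

```shell
#!/usr/bin/env bash
# Field-by-field version comparison mirroring the cmp_versions trace:
# split on '.', '-' and ':', pad the shorter version with zeros, and
# compare numerically until one field differs.
version_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -r -a ver1 <<< "$1"
    read -r -a ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # force base 10 so a field like "07" is read as 7, not octal
        local f1=$(( 10#${ver1[v]:-0} )) f2=$(( 10#${ver2[v]:-0} ))
        (( f1 < f2 )) && return 0
        (( f1 > f2 )) && return 1
    done
    return 1   # equal versions are not strictly less-than
}
```

With this, `version_lt 22.11.4 21.11.0` fails (as the `return 1` above shows for the real helper), so the 21.11-era compatibility path is skipped for DPDK 22.11.4.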
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:01:46.125 patching file lib/pcapng/rte_pcapng.c 00:01:46.125 Hunk #1 succeeded at 110 (offset -18 lines). 
00:01:46.125 21:34:09 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:01:46.125 21:34:09 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:01:46.126 21:34:09 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:01:46.126 21:34:09 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:01:46.385 21:34:09 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:01:46.385 21:34:09 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:01:46.385 21:34:09 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:01:46.385 21:34:09 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:51.663 The Meson build system
00:01:51.663 Version: 1.5.0
00:01:51.663 Source dir: /home/vagrant/spdk_repo/dpdk
00:01:51.663 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:51.663 Build type: native build
00:01:51.663 Program cat found: YES (/usr/bin/cat)
00:01:51.663 Project name: DPDK
00:01:51.663 Project version: 22.11.4
00:01:51.663 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:51.663 C linker for the host machine: gcc ld.bfd 2.40-14
00:01:51.663 Host machine cpu family: x86_64
00:01:51.663 Host machine cpu: x86_64
00:01:51.663 Message: ## Building in Developer Mode ##
00:01:51.663 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:51.663 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:51.663 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:51.663 Program objdump found: YES (/usr/bin/objdump)
00:01:51.663 Program python3 found: YES (/usr/bin/python3)
00:01:51.663 Program cat found: YES (/usr/bin/cat)
00:01:51.663 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:51.663 Checking for size of "void *" : 8 00:01:51.663 Checking for size of "void *" : 8 (cached) 00:01:51.663 Library m found: YES 00:01:51.663 Library numa found: YES 00:01:51.663 Has header "numaif.h" : YES 00:01:51.663 Library fdt found: NO 00:01:51.663 Library execinfo found: NO 00:01:51.663 Has header "execinfo.h" : YES 00:01:51.663 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:51.663 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:51.663 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:51.663 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:51.663 Run-time dependency openssl found: YES 3.1.1 00:01:51.663 Run-time dependency libpcap found: YES 1.10.4 00:01:51.663 Has header "pcap.h" with dependency libpcap: YES 00:01:51.663 Compiler for C supports arguments -Wcast-qual: YES 00:01:51.663 Compiler for C supports arguments -Wdeprecated: YES 00:01:51.663 Compiler for C supports arguments -Wformat: YES 00:01:51.663 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:51.663 Compiler for C supports arguments -Wformat-security: NO 00:01:51.663 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:51.663 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:51.663 Compiler for C supports arguments -Wnested-externs: YES 00:01:51.663 Compiler for C supports arguments -Wold-style-definition: YES 00:01:51.663 Compiler for C supports arguments -Wpointer-arith: YES 00:01:51.663 Compiler for C supports arguments -Wsign-compare: YES 00:01:51.663 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:51.663 Compiler for C supports arguments -Wundef: YES 00:01:51.663 Compiler for C supports arguments -Wwrite-strings: YES 00:01:51.663 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:51.663 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:51.663 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:51.663 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:51.663 Compiler for C supports arguments -mavx512f: YES 00:01:51.663 Checking if "AVX512 checking" compiles: YES 00:01:51.663 Fetching value of define "__SSE4_2__" : 1 00:01:51.663 Fetching value of define "__AES__" : 1 00:01:51.663 Fetching value of define "__AVX__" : 1 00:01:51.663 Fetching value of define "__AVX2__" : 1 00:01:51.663 Fetching value of define "__AVX512BW__" : 1 00:01:51.663 Fetching value of define "__AVX512CD__" : 1 00:01:51.663 Fetching value of define "__AVX512DQ__" : 1 00:01:51.663 Fetching value of define "__AVX512F__" : 1 00:01:51.663 Fetching value of define "__AVX512VL__" : 1 00:01:51.663 Fetching value of define "__PCLMUL__" : 1 00:01:51.663 Fetching value of define "__RDRND__" : 1 00:01:51.663 Fetching value of define "__RDSEED__" : 1 00:01:51.663 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:51.663 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:51.663 Message: lib/kvargs: Defining dependency "kvargs" 00:01:51.663 Message: lib/telemetry: Defining dependency "telemetry" 00:01:51.663 Checking for function "getentropy" : YES 00:01:51.663 Message: lib/eal: Defining dependency "eal" 00:01:51.663 Message: lib/ring: Defining dependency "ring" 00:01:51.663 Message: lib/rcu: Defining dependency "rcu" 00:01:51.663 Message: lib/mempool: Defining dependency "mempool" 00:01:51.663 Message: lib/mbuf: Defining dependency "mbuf" 00:01:51.663 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.663 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:51.663 Compiler for C supports arguments -mpclmul: YES 00:01:51.663 Compiler for C supports arguments -maes: YES 
00:01:51.663 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:51.663 Compiler for C supports arguments -mavx512bw: YES 00:01:51.663 Compiler for C supports arguments -mavx512dq: YES 00:01:51.663 Compiler for C supports arguments -mavx512vl: YES 00:01:51.663 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:51.663 Compiler for C supports arguments -mavx2: YES 00:01:51.663 Compiler for C supports arguments -mavx: YES 00:01:51.663 Message: lib/net: Defining dependency "net" 00:01:51.663 Message: lib/meter: Defining dependency "meter" 00:01:51.663 Message: lib/ethdev: Defining dependency "ethdev" 00:01:51.663 Message: lib/pci: Defining dependency "pci" 00:01:51.663 Message: lib/cmdline: Defining dependency "cmdline" 00:01:51.663 Message: lib/metrics: Defining dependency "metrics" 00:01:51.663 Message: lib/hash: Defining dependency "hash" 00:01:51.663 Message: lib/timer: Defining dependency "timer" 00:01:51.663 Fetching value of define "__AVX2__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512CD__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.663 Message: lib/acl: Defining dependency "acl" 00:01:51.663 Message: lib/bbdev: Defining dependency "bbdev" 00:01:51.663 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:51.663 Run-time dependency libelf found: YES 0.191 00:01:51.663 Message: lib/bpf: Defining dependency "bpf" 00:01:51.663 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:51.663 Message: lib/compressdev: Defining dependency "compressdev" 00:01:51.663 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:51.663 Message: lib/distributor: Defining dependency "distributor" 00:01:51.663 Message: lib/efd: Defining dependency "efd" 00:01:51.663 Message: lib/eventdev: Defining dependency "eventdev" 00:01:51.663 Message: lib/gpudev: 
Defining dependency "gpudev" 00:01:51.663 Message: lib/gro: Defining dependency "gro" 00:01:51.663 Message: lib/gso: Defining dependency "gso" 00:01:51.663 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:51.663 Message: lib/jobstats: Defining dependency "jobstats" 00:01:51.663 Message: lib/latencystats: Defining dependency "latencystats" 00:01:51.663 Message: lib/lpm: Defining dependency "lpm" 00:01:51.663 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:51.663 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:51.663 Message: lib/member: Defining dependency "member" 00:01:51.663 Message: lib/pcapng: Defining dependency "pcapng" 00:01:51.663 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:51.663 Message: lib/power: Defining dependency "power" 00:01:51.663 Message: lib/rawdev: Defining dependency "rawdev" 00:01:51.663 Message: lib/regexdev: Defining dependency "regexdev" 00:01:51.663 Message: lib/dmadev: Defining dependency "dmadev" 00:01:51.663 Message: lib/rib: Defining dependency "rib" 00:01:51.663 Message: lib/reorder: Defining dependency "reorder" 00:01:51.663 Message: lib/sched: Defining dependency "sched" 00:01:51.663 Message: lib/security: Defining dependency "security" 00:01:51.663 Message: lib/stack: Defining dependency "stack" 00:01:51.663 Has header "linux/userfaultfd.h" : YES 00:01:51.663 Message: lib/vhost: Defining dependency "vhost" 00:01:51.663 Message: lib/ipsec: Defining dependency "ipsec" 00:01:51.663 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:51.663 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:51.663 Message: lib/fib: Defining dependency "fib" 00:01:51.663 Message: lib/port: Defining dependency "port" 00:01:51.663 Message: lib/pdump: Defining dependency "pdump" 
00:01:51.663 Message: lib/table: Defining dependency "table" 00:01:51.663 Message: lib/pipeline: Defining dependency "pipeline" 00:01:51.663 Message: lib/graph: Defining dependency "graph" 00:01:51.663 Message: lib/node: Defining dependency "node" 00:01:51.663 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:51.663 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:51.663 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:51.663 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:51.663 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:51.663 Compiler for C supports arguments -Wno-unused-value: YES 00:01:51.663 Compiler for C supports arguments -Wno-format: YES 00:01:51.663 Compiler for C supports arguments -Wno-format-security: YES 00:01:51.663 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:51.663 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:53.046 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:53.046 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:53.046 Fetching value of define "__AVX2__" : 1 (cached) 00:01:53.046 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:53.046 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:53.046 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:53.046 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:53.046 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:53.046 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:53.046 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:53.046 Configuring doxy-api.conf using configuration 00:01:53.046 Program sphinx-build found: NO 00:01:53.046 Configuring rte_build_config.h using configuration 00:01:53.046 Message: 00:01:53.046 ================= 00:01:53.046 Applications Enabled 00:01:53.046 ================= 00:01:53.046 00:01:53.046 apps: 
00:01:53.046 	dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:01:53.046 	test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:01:53.046 	test-security-perf,
00:01:53.046
00:01:53.046 Message:
00:01:53.046 =================
00:01:53.046 Libraries Enabled
00:01:53.046 =================
00:01:53.046
00:01:53.046 libs:
00:01:53.046 	kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:01:53.046 	meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:01:53.046 	bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:01:53.046 	eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:01:53.046 	member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:01:53.046 	sched, security, stack, vhost, ipsec, fib, port, pdump,
00:01:53.046 	table, pipeline, graph, node,
00:01:53.046
00:01:53.046 Message:
00:01:53.046 ===============
00:01:53.046 Drivers Enabled
00:01:53.046 ===============
00:01:53.046
00:01:53.046 common:
00:01:53.046
00:01:53.046 bus:
00:01:53.046 	pci, vdev,
00:01:53.046 mempool:
00:01:53.046 	ring,
00:01:53.046 dma:
00:01:53.046
00:01:53.046 net:
00:01:53.046 	i40e,
00:01:53.046 raw:
00:01:53.046
00:01:53.046 crypto:
00:01:53.046
00:01:53.046 compress:
00:01:53.046
00:01:53.046 regex:
00:01:53.046
00:01:53.046 vdpa:
00:01:53.046
00:01:53.046 event:
00:01:53.046
00:01:53.046 baseband:
00:01:53.046
00:01:53.046 gpu:
00:01:53.046
00:01:53.046
00:01:53.046 Message:
00:01:53.046 =================
00:01:53.046 Content Skipped
00:01:53.046 =================
00:01:53.046
00:01:53.046 apps:
00:01:53.046
00:01:53.046 libs:
00:01:53.046 	kni: explicitly disabled via build config (deprecated lib)
00:01:53.046 	flow_classify: explicitly disabled via build config (deprecated lib)
00:01:53.046
00:01:53.046 drivers:
00:01:53.046 	common/cpt: not in enabled drivers build config
00:01:53.046 	common/dpaax: not in enabled drivers build
config 00:01:53.046 common/iavf: not in enabled drivers build config 00:01:53.046 common/idpf: not in enabled drivers build config 00:01:53.046 common/mvep: not in enabled drivers build config 00:01:53.046 common/octeontx: not in enabled drivers build config 00:01:53.046 bus/auxiliary: not in enabled drivers build config 00:01:53.046 bus/dpaa: not in enabled drivers build config 00:01:53.046 bus/fslmc: not in enabled drivers build config 00:01:53.046 bus/ifpga: not in enabled drivers build config 00:01:53.046 bus/vmbus: not in enabled drivers build config 00:01:53.046 common/cnxk: not in enabled drivers build config 00:01:53.046 common/mlx5: not in enabled drivers build config 00:01:53.046 common/qat: not in enabled drivers build config 00:01:53.046 common/sfc_efx: not in enabled drivers build config 00:01:53.046 mempool/bucket: not in enabled drivers build config 00:01:53.046 mempool/cnxk: not in enabled drivers build config 00:01:53.046 mempool/dpaa: not in enabled drivers build config 00:01:53.046 mempool/dpaa2: not in enabled drivers build config 00:01:53.046 mempool/octeontx: not in enabled drivers build config 00:01:53.046 mempool/stack: not in enabled drivers build config 00:01:53.046 dma/cnxk: not in enabled drivers build config 00:01:53.046 dma/dpaa: not in enabled drivers build config 00:01:53.046 dma/dpaa2: not in enabled drivers build config 00:01:53.046 dma/hisilicon: not in enabled drivers build config 00:01:53.046 dma/idxd: not in enabled drivers build config 00:01:53.046 dma/ioat: not in enabled drivers build config 00:01:53.046 dma/skeleton: not in enabled drivers build config 00:01:53.046 net/af_packet: not in enabled drivers build config 00:01:53.046 net/af_xdp: not in enabled drivers build config 00:01:53.046 net/ark: not in enabled drivers build config 00:01:53.046 net/atlantic: not in enabled drivers build config 00:01:53.046 net/avp: not in enabled drivers build config 00:01:53.046 net/axgbe: not in enabled drivers build config 00:01:53.046 
net/bnx2x: not in enabled drivers build config 00:01:53.046 net/bnxt: not in enabled drivers build config 00:01:53.046 net/bonding: not in enabled drivers build config 00:01:53.046 net/cnxk: not in enabled drivers build config 00:01:53.046 net/cxgbe: not in enabled drivers build config 00:01:53.046 net/dpaa: not in enabled drivers build config 00:01:53.046 net/dpaa2: not in enabled drivers build config 00:01:53.046 net/e1000: not in enabled drivers build config 00:01:53.046 net/ena: not in enabled drivers build config 00:01:53.046 net/enetc: not in enabled drivers build config 00:01:53.046 net/enetfec: not in enabled drivers build config 00:01:53.046 net/enic: not in enabled drivers build config 00:01:53.046 net/failsafe: not in enabled drivers build config 00:01:53.046 net/fm10k: not in enabled drivers build config 00:01:53.046 net/gve: not in enabled drivers build config 00:01:53.046 net/hinic: not in enabled drivers build config 00:01:53.046 net/hns3: not in enabled drivers build config 00:01:53.046 net/iavf: not in enabled drivers build config 00:01:53.046 net/ice: not in enabled drivers build config 00:01:53.046 net/idpf: not in enabled drivers build config 00:01:53.046 net/igc: not in enabled drivers build config 00:01:53.046 net/ionic: not in enabled drivers build config 00:01:53.046 net/ipn3ke: not in enabled drivers build config 00:01:53.046 net/ixgbe: not in enabled drivers build config 00:01:53.046 net/kni: not in enabled drivers build config 00:01:53.046 net/liquidio: not in enabled drivers build config 00:01:53.046 net/mana: not in enabled drivers build config 00:01:53.046 net/memif: not in enabled drivers build config 00:01:53.046 net/mlx4: not in enabled drivers build config 00:01:53.046 net/mlx5: not in enabled drivers build config 00:01:53.046 net/mvneta: not in enabled drivers build config 00:01:53.046 net/mvpp2: not in enabled drivers build config 00:01:53.046 net/netvsc: not in enabled drivers build config 00:01:53.046 net/nfb: not in enabled 
drivers build config 00:01:53.046 net/nfp: not in enabled drivers build config 00:01:53.046 net/ngbe: not in enabled drivers build config 00:01:53.046 net/null: not in enabled drivers build config 00:01:53.046 net/octeontx: not in enabled drivers build config 00:01:53.046 net/octeon_ep: not in enabled drivers build config 00:01:53.046 net/pcap: not in enabled drivers build config 00:01:53.046 net/pfe: not in enabled drivers build config 00:01:53.047 net/qede: not in enabled drivers build config 00:01:53.047 net/ring: not in enabled drivers build config 00:01:53.047 net/sfc: not in enabled drivers build config 00:01:53.047 net/softnic: not in enabled drivers build config 00:01:53.047 net/tap: not in enabled drivers build config 00:01:53.047 net/thunderx: not in enabled drivers build config 00:01:53.047 net/txgbe: not in enabled drivers build config 00:01:53.047 net/vdev_netvsc: not in enabled drivers build config 00:01:53.047 net/vhost: not in enabled drivers build config 00:01:53.047 net/virtio: not in enabled drivers build config 00:01:53.047 net/vmxnet3: not in enabled drivers build config 00:01:53.047 raw/cnxk_bphy: not in enabled drivers build config 00:01:53.047 raw/cnxk_gpio: not in enabled drivers build config 00:01:53.047 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:53.047 raw/ifpga: not in enabled drivers build config 00:01:53.047 raw/ntb: not in enabled drivers build config 00:01:53.047 raw/skeleton: not in enabled drivers build config 00:01:53.047 crypto/armv8: not in enabled drivers build config 00:01:53.047 crypto/bcmfs: not in enabled drivers build config 00:01:53.047 crypto/caam_jr: not in enabled drivers build config 00:01:53.047 crypto/ccp: not in enabled drivers build config 00:01:53.047 crypto/cnxk: not in enabled drivers build config 00:01:53.047 crypto/dpaa_sec: not in enabled drivers build config 00:01:53.047 crypto/dpaa2_sec: not in enabled drivers build config 00:01:53.047 crypto/ipsec_mb: not in enabled drivers build config 
00:01:53.047 crypto/mlx5: not in enabled drivers build config
00:01:53.047 crypto/mvsam: not in enabled drivers build config
00:01:53.047 crypto/nitrox: not in enabled drivers build config
00:01:53.047 crypto/null: not in enabled drivers build config
00:01:53.047 crypto/octeontx: not in enabled drivers build config
00:01:53.047 crypto/openssl: not in enabled drivers build config
00:01:53.047 crypto/scheduler: not in enabled drivers build config
00:01:53.047 crypto/uadk: not in enabled drivers build config
00:01:53.047 crypto/virtio: not in enabled drivers build config
00:01:53.047 compress/isal: not in enabled drivers build config
00:01:53.047 compress/mlx5: not in enabled drivers build config
00:01:53.047 compress/octeontx: not in enabled drivers build config
00:01:53.047 compress/zlib: not in enabled drivers build config
00:01:53.047 regex/mlx5: not in enabled drivers build config
00:01:53.047 regex/cn9k: not in enabled drivers build config
00:01:53.047 vdpa/ifc: not in enabled drivers build config
00:01:53.047 vdpa/mlx5: not in enabled drivers build config
00:01:53.047 vdpa/sfc: not in enabled drivers build config
00:01:53.047 event/cnxk: not in enabled drivers build config
00:01:53.047 event/dlb2: not in enabled drivers build config
00:01:53.047 event/dpaa: not in enabled drivers build config
00:01:53.047 event/dpaa2: not in enabled drivers build config
00:01:53.047 event/dsw: not in enabled drivers build config
00:01:53.047 event/opdl: not in enabled drivers build config
00:01:53.047 event/skeleton: not in enabled drivers build config
00:01:53.047 event/sw: not in enabled drivers build config
00:01:53.047 event/octeontx: not in enabled drivers build config
00:01:53.047 baseband/acc: not in enabled drivers build config
00:01:53.047 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:53.047 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:53.047 baseband/la12xx: not in enabled drivers build config
00:01:53.047 baseband/null: not in enabled drivers build config
00:01:53.047 baseband/turbo_sw: not in enabled drivers build config
00:01:53.047 gpu/cuda: not in enabled drivers build config
00:01:53.047 
00:01:53.047 
00:01:53.047 Build targets in project: 311
00:01:53.047 
00:01:53.047 DPDK 22.11.4
00:01:53.047 
00:01:53.047 User defined options
00:01:53.047 libdir : lib
00:01:53.047 prefix : /home/vagrant/spdk_repo/dpdk/build
00:01:53.047 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:53.047 c_link_args :
00:01:53.047 enable_docs : false
00:01:53.047 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:01:53.047 enable_kmods : false
00:01:53.047 machine : native
00:01:53.047 tests : false
00:01:53.047 
00:01:53.047 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.047 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
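[Editor's note] The meson warning above is triggered when meson is invoked without an explicit `setup` subcommand. A minimal sketch of the explicit form, reusing the prefix, libdir, and c_args recorded under "User defined options" in this log (the long enable_drivers list is omitted here for brevity; this is an illustration, not the autobuild script's exact command):

```shell
# Hedged sketch: the explicit `meson setup` invocation that avoids the
# "meson [options] ... is ambiguous and deprecated" warning seen above.
# Values are taken from the logged "User defined options" summary;
# the enable_drivers list from the log is omitted for brevity.
meson setup build-tmp \
  --prefix=/home/vagrant/spdk_repo/dpdk/build \
  --libdir=lib \
  -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
  -Denable_docs=false \
  -Denable_kmods=false \
  -Dtests=false

# Then build, matching the logged ninja invocation:
ninja -C build-tmp -j10
```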
00:01:53.047 21:34:16 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:01:53.306 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:01:53.306 [1/740] Generating lib/rte_telemetry_mingw with a custom command 00:01:53.306 [2/740] Generating lib/rte_kvargs_def with a custom command 00:01:53.306 [3/740] Generating lib/rte_kvargs_mingw with a custom command 00:01:53.306 [4/740] Generating lib/rte_telemetry_def with a custom command 00:01:53.306 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:53.306 [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:53.306 [7/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:53.306 [8/740] Linking static target lib/librte_kvargs.a 00:01:53.306 [9/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:53.307 [10/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:53.307 [11/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:53.307 [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:53.307 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:53.307 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:53.307 [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:53.566 [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:53.566 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:53.566 [18/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:53.566 [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:53.566 [20/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.566 
[21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:53.566 [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:01:53.566 [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:53.566 [24/740] Linking target lib/librte_kvargs.so.23.0 00:01:53.566 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:53.566 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:53.566 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:53.566 [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:53.566 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:53.566 [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:53.566 [31/740] Linking static target lib/librte_telemetry.a 00:01:53.566 [32/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:53.825 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:53.825 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:53.825 [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:53.825 [36/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:53.825 [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:53.825 [38/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:01:53.825 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:53.825 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:53.825 [41/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.085 [42/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:54.085 
[43/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.085 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:54.085 [45/740] Linking target lib/librte_telemetry.so.23.0 00:01:54.085 [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:54.085 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:54.085 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:54.085 [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:54.085 [50/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:54.085 [51/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:54.085 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.085 [53/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:54.085 [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:54.085 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:54.085 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:54.085 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:54.085 [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:54.085 [59/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:54.344 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:54.344 [61/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:54.344 [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:54.344 [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:54.344 [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:54.344 [65/740] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:54.344 [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:01:54.344 [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:54.344 [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:54.344 [69/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:54.344 [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:54.344 [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:54.344 [72/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:54.344 [73/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:54.344 [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:54.344 [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:54.344 [76/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:54.344 [77/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:54.344 [78/740] Generating lib/rte_eal_def with a custom command 00:01:54.344 [79/740] Generating lib/rte_eal_mingw with a custom command 00:01:54.344 [80/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:54.344 [81/740] Generating lib/rte_ring_def with a custom command 00:01:54.344 [82/740] Generating lib/rte_ring_mingw with a custom command 00:01:54.602 [83/740] Generating lib/rte_rcu_def with a custom command 00:01:54.602 [84/740] Generating lib/rte_rcu_mingw with a custom command 00:01:54.602 [85/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:54.602 [86/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:54.602 [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:54.602 [88/740] Linking static target lib/librte_ring.a 00:01:54.602 [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:01:54.602 [90/740] Generating lib/rte_mempool_def with a custom command 00:01:54.602 [91/740] Generating lib/rte_mempool_mingw with a custom command 00:01:54.602 [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:54.602 [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:54.861 [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.861 [95/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:54.861 [96/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:54.861 [97/740] Generating lib/rte_mbuf_def with a custom command 00:01:54.861 [98/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:54.861 [99/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:54.861 [100/740] Generating lib/rte_mbuf_mingw with a custom command 00:01:54.861 [101/740] Linking static target lib/librte_eal.a 00:01:55.120 [102/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.120 [103/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.120 [104/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.120 [105/740] Linking static target lib/librte_rcu.a 00:01:55.120 [106/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.120 [107/740] Linking static target lib/librte_mempool.a 00:01:55.120 [108/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.380 [109/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.380 [110/740] Generating lib/rte_net_def with a custom command 00:01:55.380 [111/740] Generating lib/rte_net_mingw with a custom command 00:01:55.380 [112/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.380 [113/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:55.380 [114/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.380 [115/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.380 [116/740] Generating lib/rte_meter_def with a custom command 00:01:55.380 [117/740] Generating lib/rte_meter_mingw with a custom command 00:01:55.380 [118/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.380 [119/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.380 [120/740] Linking static target lib/librte_meter.a 00:01:55.380 [121/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.639 [122/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.639 [123/740] Linking static target lib/librte_net.a 00:01:55.639 [124/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.639 [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.639 [126/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.639 [127/740] Linking static target lib/librte_mbuf.a 00:01:55.923 [128/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:55.923 [129/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.923 [130/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.923 [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.923 [132/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.923 [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.195 [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.195 [135/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.195 [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.195 
[137/740] Generating lib/rte_ethdev_def with a custom command 00:01:56.195 [138/740] Generating lib/rte_ethdev_mingw with a custom command 00:01:56.195 [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.195 [140/740] Generating lib/rte_pci_def with a custom command 00:01:56.454 [141/740] Generating lib/rte_pci_mingw with a custom command 00:01:56.454 [142/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.454 [143/740] Linking static target lib/librte_pci.a 00:01:56.454 [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.454 [145/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.455 [146/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.455 [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.455 [148/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.455 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.455 [150/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.713 [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.713 [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.713 [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.713 [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.713 [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.713 [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.713 [157/740] Generating lib/rte_cmdline_def with a custom command 00:01:56.713 [158/740] Generating lib/rte_cmdline_mingw with a custom command 00:01:56.713 [159/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.713 [160/740] Generating lib/rte_metrics_def 
with a custom command 00:01:56.713 [161/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.713 [162/740] Generating lib/rte_metrics_mingw with a custom command 00:01:56.713 [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.713 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.713 [165/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:56.713 [166/740] Generating lib/rte_hash_def with a custom command 00:01:56.713 [167/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.713 [168/740] Generating lib/rte_hash_mingw with a custom command 00:01:56.713 [169/740] Linking static target lib/librte_cmdline.a 00:01:56.973 [170/740] Generating lib/rte_timer_def with a custom command 00:01:56.973 [171/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.973 [172/740] Generating lib/rte_timer_mingw with a custom command 00:01:56.973 [173/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.973 [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:56.973 [175/740] Linking static target lib/librte_metrics.a 00:01:57.232 [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.232 [177/740] Linking static target lib/librte_timer.a 00:01:57.232 [178/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.491 [179/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:57.491 [180/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.491 [181/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.491 [182/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:57.491 [183/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.491 
[184/740] Generating lib/rte_acl_def with a custom command 00:01:57.491 [185/740] Generating lib/rte_acl_mingw with a custom command 00:01:57.750 [186/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:57.750 [187/740] Generating lib/rte_bbdev_def with a custom command 00:01:57.750 [188/740] Generating lib/rte_bbdev_mingw with a custom command 00:01:57.750 [189/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:57.750 [190/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.750 [191/740] Generating lib/rte_bitratestats_def with a custom command 00:01:57.750 [192/740] Linking static target lib/librte_ethdev.a 00:01:57.750 [193/740] Generating lib/rte_bitratestats_mingw with a custom command 00:01:58.009 [194/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:58.009 [195/740] Linking static target lib/librte_bitratestats.a 00:01:58.268 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:58.268 [197/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:58.268 [198/740] Linking static target lib/librte_bbdev.a 00:01:58.268 [199/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:58.268 [200/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.527 [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:58.786 [202/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:58.786 [203/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.786 [204/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:58.786 [205/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:58.786 [206/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:59.045 [207/740] Linking static target lib/librte_hash.a 00:01:59.045 [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 
00:01:59.303 [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:59.303 [210/740] Generating lib/rte_bpf_def with a custom command 00:01:59.303 [211/740] Generating lib/rte_bpf_mingw with a custom command 00:01:59.303 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:59.303 [213/740] Generating lib/rte_cfgfile_def with a custom command 00:01:59.303 [214/740] Generating lib/rte_cfgfile_mingw with a custom command 00:01:59.562 [215/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.562 [216/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:01:59.562 [217/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:59.562 [218/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:59.562 [219/740] Linking static target lib/librte_cfgfile.a 00:01:59.562 [220/740] Generating lib/rte_compressdev_def with a custom command 00:01:59.562 [221/740] Generating lib/rte_compressdev_mingw with a custom command 00:01:59.562 [222/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:59.562 [223/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:59.562 [224/740] Linking static target lib/librte_bpf.a 00:01:59.821 [225/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.821 [226/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:59.821 [227/740] Generating lib/rte_cryptodev_def with a custom command 00:01:59.821 [228/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:59.821 [229/740] Generating lib/rte_cryptodev_mingw with a custom command 00:01:59.821 [230/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:59.821 [231/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.821 [232/740] Linking static target 
lib/librte_compressdev.a 00:02:00.079 [233/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:00.080 [234/740] Generating lib/rte_distributor_def with a custom command 00:02:00.080 [235/740] Generating lib/rte_distributor_mingw with a custom command 00:02:00.080 [236/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:00.080 [237/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:00.080 [238/740] Generating lib/rte_efd_def with a custom command 00:02:00.080 [239/740] Linking static target lib/librte_acl.a 00:02:00.080 [240/740] Generating lib/rte_efd_mingw with a custom command 00:02:00.338 [241/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:00.338 [242/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:00.339 [243/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.339 [244/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:00.339 [245/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.598 [246/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:00.598 [247/740] Linking static target lib/librte_distributor.a 00:02:00.598 [248/740] Linking target lib/librte_eal.so.23.0 00:02:00.598 [249/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:00.598 [250/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.598 [251/740] Linking target lib/librte_ring.so.23.0 00:02:00.598 [252/740] Linking target lib/librte_meter.so.23.0 00:02:00.598 [253/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.598 [254/740] Linking target lib/librte_pci.so.23.0 00:02:00.857 [255/740] Compiling C 
object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:00.857 [256/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:00.857 [257/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:00.857 [258/740] Linking target lib/librte_timer.so.23.0 00:02:00.857 [259/740] Linking target lib/librte_acl.so.23.0 00:02:00.857 [260/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:00.857 [261/740] Linking target lib/librte_rcu.so.23.0 00:02:00.857 [262/740] Linking target lib/librte_mempool.so.23.0 00:02:00.857 [263/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:00.857 [264/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:00.857 [265/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:00.857 [266/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:00.857 [267/740] Linking target lib/librte_cfgfile.so.23.0 00:02:00.857 [268/740] Linking target lib/librte_mbuf.so.23.0 00:02:01.116 [269/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:01.116 [270/740] Linking target lib/librte_net.so.23.0 00:02:01.116 [271/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:01.116 [272/740] Linking target lib/librte_bbdev.so.23.0 00:02:01.116 [273/740] Linking target lib/librte_compressdev.so.23.0 00:02:01.116 [274/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:01.116 [275/740] Linking target lib/librte_distributor.so.23.0 00:02:01.116 [276/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:01.116 [277/740] Linking target lib/librte_cmdline.so.23.0 00:02:01.116 [278/740] Linking target lib/librte_hash.so.23.0 00:02:01.116 [279/740] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:01.116 [280/740] Linking static target lib/librte_efd.a 00:02:01.116 [281/740] Generating lib/rte_eventdev_def with a custom command 00:02:01.116 [282/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:01.375 [283/740] Generating lib/rte_gpudev_def with a custom command 00:02:01.375 [284/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:01.375 [285/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:01.375 [286/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:01.375 [287/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.375 [288/740] Linking target lib/librte_efd.so.23.0 00:02:01.634 [289/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.634 [290/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.634 [291/740] Linking static target lib/librte_cryptodev.a 00:02:01.634 [292/740] Linking target lib/librte_ethdev.so.23.0 00:02:01.634 [293/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:01.634 [294/740] Linking target lib/librte_metrics.so.23.0 00:02:01.634 [295/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:01.892 [296/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:01.892 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:01.892 [298/740] Linking target lib/librte_bpf.so.23.0 00:02:01.892 [299/740] Linking static target lib/librte_gpudev.a 00:02:01.893 [300/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:01.893 [301/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:01.893 [302/740] Generating lib/rte_gro_def with a custom command 00:02:01.893 [303/740] Linking target 
lib/librte_bitratestats.so.23.0 00:02:01.893 [304/740] Generating lib/rte_gro_mingw with a custom command 00:02:01.893 [305/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:01.893 [306/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:01.893 [307/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:01.893 [308/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:02.151 [309/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:02.151 [310/740] Generating lib/rte_gso_def with a custom command 00:02:02.409 [311/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:02.409 [312/740] Generating lib/rte_gso_mingw with a custom command 00:02:02.409 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:02.409 [314/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:02.409 [315/740] Linking static target lib/librte_gro.a 00:02:02.409 [316/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:02.409 [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:02.409 [318/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.409 [319/740] Linking target lib/librte_gpudev.so.23.0 00:02:02.409 [320/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.409 [321/740] Linking target lib/librte_gro.so.23.0 00:02:02.666 [322/740] Generating lib/rte_ip_frag_def with a custom command 00:02:02.666 [323/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:02.666 [324/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:02.666 [325/740] Linking static target lib/librte_gso.a 00:02:02.666 [326/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:02.666 [327/740] Linking static target lib/librte_eventdev.a 00:02:02.666 [328/740] Compiling 
C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:02.666 [329/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:02.666 [330/740] Generating lib/rte_jobstats_def with a custom command 00:02:02.666 [331/740] Linking static target lib/librte_jobstats.a 00:02:02.666 [332/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:02.666 [333/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:02.666 [334/740] Generating lib/rte_latencystats_def with a custom command 00:02:02.666 [335/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:02.924 [336/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.924 [337/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:02.924 [338/740] Linking target lib/librte_gso.so.23.0 00:02:02.924 [339/740] Generating lib/rte_lpm_def with a custom command 00:02:02.924 [340/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:02.924 [341/740] Generating lib/rte_lpm_mingw with a custom command 00:02:02.924 [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:02.924 [343/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.924 [344/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:02.924 [345/740] Linking static target lib/librte_ip_frag.a 00:02:02.924 [346/740] Linking target lib/librte_jobstats.so.23.0 00:02:03.182 [347/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:03.182 [348/740] Linking static target lib/librte_latencystats.a 00:02:03.182 [349/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.182 [350/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:03.440 [351/740] Compiling 
C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:03.440 [352/740] Linking target lib/librte_ip_frag.so.23.0 00:02:03.440 [353/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:03.440 [354/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:03.440 [355/740] Generating lib/rte_member_def with a custom command 00:02:03.440 [356/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.440 [357/740] Generating lib/rte_member_mingw with a custom command 00:02:03.440 [358/740] Generating lib/rte_pcapng_def with a custom command 00:02:03.440 [359/740] Linking target lib/librte_cryptodev.so.23.0 00:02:03.440 [360/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:03.440 [361/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.440 [362/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:03.440 [363/740] Linking target lib/librte_latencystats.so.23.0 00:02:03.440 [364/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:03.440 [365/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:03.440 [366/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:03.440 [367/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:03.697 [368/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.697 [369/740] Linking static target lib/librte_lpm.a 00:02:03.697 [370/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:03.955 [371/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:03.955 [372/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.955 [373/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:03.955 [374/740] Generating lib/rte_power_def with 
a custom command 00:02:03.955 [375/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:03.955 [376/740] Generating lib/rte_power_mingw with a custom command 00:02:03.955 [377/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.955 [378/740] Generating lib/rte_rawdev_def with a custom command 00:02:03.955 [379/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.955 [380/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:03.955 [381/740] Linking target lib/librte_lpm.so.23.0 00:02:03.955 [382/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:03.955 [383/740] Generating lib/rte_regexdev_def with a custom command 00:02:03.955 [384/740] Linking static target lib/librte_pcapng.a 00:02:03.955 [385/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:03.955 [386/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:04.212 [387/740] Generating lib/rte_dmadev_def with a custom command 00:02:04.212 [388/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:04.212 [389/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:04.212 [390/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:04.212 [391/740] Linking static target lib/librte_rawdev.a 00:02:04.212 [392/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.212 [393/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:04.212 [394/740] Linking target lib/librte_pcapng.so.23.0 00:02:04.212 [395/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.212 [396/740] Generating lib/rte_rib_def with a custom command 00:02:04.212 [397/740] Generating lib/rte_rib_mingw with a custom command 00:02:04.212 [398/740] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:04.212 [399/740] Linking static target lib/librte_dmadev.a 00:02:04.212 [400/740] Linking target lib/librte_eventdev.so.23.0 00:02:04.471 [401/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:04.471 [402/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:04.471 [403/740] Linking static target lib/librte_power.a 00:02:04.471 [404/740] Generating lib/rte_reorder_def with a custom command 00:02:04.471 [405/740] Generating lib/rte_reorder_mingw with a custom command 00:02:04.471 [406/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:04.471 [407/740] Linking static target lib/librte_regexdev.a 00:02:04.471 [408/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:04.471 [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:04.471 [410/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.471 [411/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:04.729 [412/740] Linking target lib/librte_rawdev.so.23.0 00:02:04.729 [413/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:04.729 [414/740] Linking static target lib/librte_member.a 00:02:04.729 [415/740] Generating lib/rte_sched_def with a custom command 00:02:04.729 [416/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:04.729 [417/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:04.729 [418/740] Generating lib/rte_sched_mingw with a custom command 00:02:04.729 [419/740] Generating lib/rte_security_def with a custom command 00:02:04.729 [420/740] Generating lib/rte_security_mingw with a custom command 00:02:04.729 [421/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:04.729 [422/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson 
to capture output) 00:02:04.729 [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:04.729 [424/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:04.729 [425/740] Linking static target lib/librte_reorder.a 00:02:04.729 [426/740] Linking target lib/librte_dmadev.so.23.0 00:02:04.729 [427/740] Generating lib/rte_stack_def with a custom command 00:02:04.729 [428/740] Generating lib/rte_stack_mingw with a custom command 00:02:04.729 [429/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:04.729 [430/740] Linking static target lib/librte_stack.a 00:02:04.987 [431/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:04.987 [432/740] Linking static target lib/librte_rib.a 00:02:04.987 [433/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:04.987 [434/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [435/740] Linking target lib/librte_member.so.23.0 00:02:04.987 [436/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:04.987 [437/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [438/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [439/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [440/740] Linking target lib/librte_reorder.so.23.0 00:02:04.987 [441/740] Linking target lib/librte_regexdev.so.23.0 00:02:04.987 [442/740] Linking target lib/librte_stack.so.23.0 00:02:05.245 [443/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.245 [444/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:05.245 [445/740] Linking static target lib/librte_security.a 00:02:05.245 [446/740] Linking target lib/librte_power.so.23.0 00:02:05.245 
[447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.245 [448/740] Linking target lib/librte_rib.so.23.0 00:02:05.245 [449/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:05.245 [450/740] Generating lib/rte_vhost_def with a custom command 00:02:05.503 [451/740] Generating lib/rte_vhost_mingw with a custom command 00:02:05.503 [452/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:05.503 [453/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:05.503 [454/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:05.503 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.503 [456/740] Linking target lib/librte_security.so.23.0 00:02:05.761 [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:05.761 [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:05.761 [459/740] Linking static target lib/librte_sched.a 00:02:05.761 [460/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:06.018 [461/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:06.018 [462/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.018 [463/740] Generating lib/rte_ipsec_def with a custom command 00:02:06.018 [464/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:06.018 [465/740] Linking target lib/librte_sched.so.23.0 00:02:06.018 [466/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:06.280 [467/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:06.280 [468/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:06.280 [469/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:06.280 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:06.280 
[471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:06.280 [472/740] Generating lib/rte_fib_def with a custom command 00:02:06.280 [473/740] Generating lib/rte_fib_mingw with a custom command 00:02:06.538 [474/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:06.538 [475/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:06.797 [476/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:06.797 [477/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:06.797 [478/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:07.055 [479/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:07.055 [480/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:07.055 [481/740] Linking static target lib/librte_fib.a 00:02:07.055 [482/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:07.055 [483/740] Linking static target lib/librte_ipsec.a 00:02:07.312 [484/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:07.312 [485/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:07.312 [486/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.312 [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:07.312 [488/740] Linking target lib/librte_fib.so.23.0 00:02:07.312 [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:07.312 [490/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.312 [491/740] Linking target lib/librte_ipsec.so.23.0 00:02:07.570 [492/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:07.829 [493/740] Generating lib/rte_port_def with a custom command 00:02:07.829 [494/740] Generating lib/rte_port_mingw with a custom command 00:02:07.829 [495/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 
00:02:07.829 [496/740] Generating lib/rte_pdump_def with a custom command 00:02:07.829 [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:07.829 [498/740] Generating lib/rte_pdump_mingw with a custom command 00:02:07.829 [499/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:08.088 [500/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:08.088 [501/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:08.088 [502/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:08.088 [503/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:08.088 [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:08.088 [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:08.345 [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:08.346 [507/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:08.346 [508/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:08.604 [509/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:08.604 [510/740] Linking static target lib/librte_port.a 00:02:08.604 [511/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:08.604 [512/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:08.604 [513/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:08.604 [514/740] Linking static target lib/librte_pdump.a 00:02:08.862 [515/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.862 [516/740] Linking target lib/librte_pdump.so.23.0 00:02:08.862 [517/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.862 [518/740] Linking target lib/librte_port.so.23.0 00:02:08.862 [519/740] 
Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:09.120 [520/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:09.120 [521/740] Generating lib/rte_table_def with a custom command 00:02:09.120 [522/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:09.120 [523/740] Generating lib/rte_table_mingw with a custom command 00:02:09.120 [524/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:09.379 [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:09.379 [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:09.379 [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:09.379 [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:09.379 [529/740] Generating lib/rte_pipeline_def with a custom command 00:02:09.379 [530/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:09.379 [531/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:09.379 [532/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:09.379 [533/740] Linking static target lib/librte_table.a 00:02:09.637 [534/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:09.895 [535/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:09.895 [536/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.895 [537/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:10.153 [538/740] Linking target lib/librte_table.so.23.0 00:02:10.153 [539/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:10.153 [540/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:10.153 [541/740] Generating lib/rte_graph_def with a custom command 00:02:10.153 [542/740] Generating 
lib/rte_graph_mingw with a custom command 00:02:10.153 [543/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:10.421 [544/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:10.421 [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:10.421 [546/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:10.421 [547/740] Linking static target lib/librte_graph.a 00:02:10.693 [548/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:10.693 [549/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:10.693 [550/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:10.693 [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:10.951 [552/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:10.951 [553/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:10.951 [554/740] Generating lib/rte_node_def with a custom command 00:02:10.951 [555/740] Generating lib/rte_node_mingw with a custom command 00:02:10.951 [556/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:11.210 [557/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.210 [558/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.210 [559/740] Linking target lib/librte_graph.so.23.0 00:02:11.210 [560/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:11.210 [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:11.210 [562/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:11.210 [563/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:11.210 [564/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:11.210 [565/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 
00:02:11.210 [566/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:11.468 [567/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.468 [568/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:11.468 [569/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:11.468 [570/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:11.468 [571/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:11.468 [572/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:11.468 [573/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:11.468 [574/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:11.468 [575/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:11.468 [576/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:11.725 [577/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:11.725 [578/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:11.725 [579/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:11.725 [580/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.725 [581/740] Linking static target drivers/librte_bus_vdev.a 00:02:11.725 [582/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:11.725 [583/740] Linking static target lib/librte_node.a 00:02:11.983 [584/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:11.983 [585/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.983 [586/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.983 [587/740] Linking static target drivers/librte_bus_pci.a 00:02:11.983 [588/740] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:11.983 [589/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:11.983 [590/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.983 [591/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:11.983 [592/740] Linking target lib/librte_node.so.23.0 00:02:11.983 [593/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:11.983 [594/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:11.983 [595/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:12.241 [596/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.241 [597/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:12.241 [598/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:12.241 [599/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:12.241 [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:12.241 [601/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:12.498 [602/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:12.498 [603/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.498 [604/740] Linking static target drivers/librte_mempool_ring.a 00:02:12.498 [605/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:12.498 [606/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:12.498 [607/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:12.756 [608/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 
00:02:13.014 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:13.014 [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:13.014 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:13.580 [612/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:13.580 [613/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:13.838 [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:13.838 [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:13.838 [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:14.096 [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:14.096 [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:14.354 [619/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:14.354 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:14.354 [621/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:14.922 [622/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:14.922 [623/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:15.180 [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:15.180 [625/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:15.180 [626/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:15.180 [627/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:15.439 [628/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:15.439 [629/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:15.439 [630/740] Compiling C object 
app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:15.439 [631/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:15.697 [632/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:15.697 [633/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:15.955 [634/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:15.955 [635/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:16.214 [636/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:16.214 [637/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:16.214 [638/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:16.472 [639/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:16.472 [640/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:16.472 [641/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:16.472 [642/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:16.472 [643/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.730 [644/740] Linking static target drivers/librte_net_i40e.a 00:02:16.730 [645/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:16.730 [646/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:16.730 [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:16.730 [648/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:16.988 [649/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:16.988 [650/740] 
Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:17.245 [651/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.245 [652/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:17.245 [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:17.245 [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:17.504 [655/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:17.504 [656/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:17.504 [657/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:17.504 [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:17.504 [659/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:17.761 [660/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.761 [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:17.761 [662/740] Linking static target lib/librte_vhost.a 00:02:17.761 [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:17.761 [664/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:17.761 [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:18.019 [666/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:18.019 [667/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:18.277 [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:18.535 [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:18.535 
[670/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.535 [671/740] Linking target lib/librte_vhost.so.23.0 00:02:18.793 [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:18.793 [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:18.793 [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:19.052 [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:19.052 [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:19.052 [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:19.052 [678/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:19.052 [679/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:19.310 [680/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:19.310 [681/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:19.310 [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:19.576 [683/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:19.576 [684/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:19.576 [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:19.576 [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:19.847 [687/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:19.847 [688/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:19.847 [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:19.847 [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 
00:02:20.105 [691/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:20.105 [692/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:20.105 [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:20.105 [694/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:20.363 [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:20.621 [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:20.621 [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:20.880 [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:20.880 [699/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:20.880 [700/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:21.138 [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:21.396 [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:21.396 [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:21.654 [704/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:21.654 [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:21.654 [706/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:21.654 [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:21.912 [708/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:22.171 [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:22.429 [710/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:22.429 [711/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:22.429 [712/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:22.688 [713/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:22.688 [714/740] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:22.688 [715/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:22.688 [716/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:22.688 [717/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:22.948 [718/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:23.208 [719/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:24.592 [720/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:24.592 [721/740] Linking static target lib/librte_pipeline.a 00:02:24.851 [722/740] Linking target app/dpdk-dumpcap 00:02:24.851 [723/740] Linking target app/dpdk-pdump 00:02:24.851 [724/740] Linking target app/dpdk-proc-info 00:02:24.851 [725/740] Linking target app/dpdk-test-compress-perf 00:02:24.851 [726/740] Linking target app/dpdk-test-acl 00:02:24.851 [727/740] Linking target app/dpdk-test-crypto-perf 00:02:24.851 [728/740] Linking target app/dpdk-test-bbdev 00:02:24.851 [729/740] Linking target app/dpdk-test-cmdline 00:02:24.851 [730/740] Linking target app/dpdk-test-eventdev 00:02:25.111 [731/740] Linking target app/dpdk-test-fib 00:02:25.111 [732/740] Linking target app/dpdk-test-gpudev 00:02:25.111 [733/740] Linking target app/dpdk-test-flow-perf 00:02:25.111 [734/740] Linking target app/dpdk-test-pipeline 00:02:25.111 [735/740] Linking target app/dpdk-test-security-perf 00:02:25.111 [736/740] Linking target app/dpdk-test-sad 00:02:25.111 [737/740] Linking target app/dpdk-test-regex 00:02:25.111 [738/740] Linking target app/dpdk-testpmd 00:02:30.422 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.422 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:30.422 21:34:52 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:30.422 21:34:52 build_native_dpdk -- 
common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:30.422 21:34:52 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:02:30.422 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:30.422 [0/1] Installing files. 00:02:30.422 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.422 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.423 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.423 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.423 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 
00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.424 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.425 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.426 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:30.426 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:30.426 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_kvargs.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.426 Installing 
lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.426 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.427 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.427 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:30.427 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.427 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:30.427 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.427 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:30.427 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:30.427 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:30.427 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.427 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.428 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:30.429 Installing
/home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing 
/home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.429 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.430 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:30.430 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:02:30.430 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:30.430 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:02:30.430 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:30.430 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:02:30.430 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:30.430 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:02:30.430 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:30.430 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:02:30.430 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:30.430 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:02:30.430 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:30.430 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:02:30.430 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:30.430 Installing symlink pointing to librte_net.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:02:30.430 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:30.430 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:02:30.430 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:30.430 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:02:30.430 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:30.430 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:02:30.430 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:30.430 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:02:30.430 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:30.430 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:02:30.430 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:30.430 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:02:30.430 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:30.430 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:02:30.430 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:30.430 Installing symlink pointing to librte_acl.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:02:30.430 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:30.430 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:02:30.430 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:30.430 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:02:30.430 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:30.430 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:02:30.430 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:30.430 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:02:30.430 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:30.430 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:02:30.430 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:30.430 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:02:30.430 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:30.430 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:02:30.430 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 
00:02:30.430 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:02:30.430 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:30.430 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:02:30.430 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:30.430 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:02:30.430 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:30.430 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:30.430 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:30.430 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:30.430 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:30.430 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:30.430 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:30.430 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:30.430 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:30.430 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:30.430 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:30.430 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:30.430 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:30.430 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:02:30.430 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:30.430 Installing 
symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:02:30.430 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:30.430 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:02:30.430 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:30.430 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:02:30.430 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:30.430 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:02:30.430 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:30.430 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:02:30.430 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:30.430 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:02:30.430 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:30.430 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:02:30.430 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:30.430 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:02:30.430 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 
00:02:30.430 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:02:30.430 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:30.430 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:02:30.430 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:30.430 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:02:30.430 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:30.430 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:02:30.431 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:30.431 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:02:30.431 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:30.431 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:02:30.431 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:30.431 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:02:30.431 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:30.431 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:02:30.431 Installing symlink pointing to librte_stack.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:30.431 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:02:30.431 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:30.431 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:02:30.431 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:30.431 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:02:30.431 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:30.431 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:02:30.431 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:30.431 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:02:30.431 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:30.431 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:02:30.431 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:30.431 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:02:30.431 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:30.431 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:02:30.431 Installing symlink pointing to librte_graph.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:30.431 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:02:30.431 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:30.431 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:30.431 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:30.431 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:30.431 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:30.431 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:30.431 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:30.431 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:30.431 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:30.431 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:30.691 21:34:53 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:30.691 ************************************ 00:02:30.691 END TEST build_native_dpdk 00:02:30.691 ************************************ 00:02:30.691 21:34:53 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:30.691 00:02:30.691 real 0m44.448s 
00:02:30.691 user 4m18.695s 00:02:30.691 sys 0m49.521s 00:02:30.691 21:34:53 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:30.691 21:34:53 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:30.691 21:34:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:30.691 21:34:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:30.691 21:34:53 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:02:30.691 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:02:30.952 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:02:30.952 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:02:30.952 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:31.523 Using 'verbs' RDMA provider 00:02:47.841 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:02.753 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:02.754 Creating mk/config.mk...done. 00:03:02.754 Creating mk/cc.flags.mk...done. 00:03:02.754 Type 'make' to build. 
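The many "Installing symlink pointing to librte_*.so.23.0 …" lines above follow the standard shared-library versioning layout: a fully versioned file (`.so.23.0`), a soname link (`.so.23`) resolved at run time, and an unversioned dev link (`.so`) resolved at link time. A minimal sketch of that chain in a throwaway directory — `libdemo` is an invented placeholder name, not a real DPDK library:

```shell
# Recreate the three-level symlink chain the DPDK install step builds.
tmp=$(mktemp -d)
cd "$tmp"
touch libdemo.so.23.0                 # the actual versioned shared object
ln -s libdemo.so.23.0 libdemo.so.23   # soname link, used by the dynamic loader
ln -s libdemo.so.23 libdemo.so        # dev link, used by the linker at build time
readlink libdemo.so                   # -> libdemo.so.23
readlink libdemo.so.23                # -> libdemo.so.23.0
```

Opening `libdemo.so` follows the chain down to the single real file, which is why the installer only needs to ship one `.so.23.0` per library plus two cheap symlinks.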
00:03:02.754 21:35:25 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:02.754 21:35:25 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:02.754 21:35:25 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:02.754 21:35:25 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.754 ************************************ 00:03:02.754 START TEST make 00:03:02.754 ************************************ 00:03:02.754 21:35:25 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:03.012 make[1]: Nothing to be done for 'all'. 00:03:49.707 CC lib/log/log.o 00:03:49.707 CC lib/log/log_flags.o 00:03:49.707 CC lib/log/log_deprecated.o 00:03:49.707 CC lib/ut/ut.o 00:03:49.707 CC lib/ut_mock/mock.o 00:03:49.707 LIB libspdk_log.a 00:03:49.707 LIB libspdk_ut_mock.a 00:03:49.707 LIB libspdk_ut.a 00:03:49.707 SO libspdk_log.so.7.1 00:03:49.707 SO libspdk_ut_mock.so.6.0 00:03:49.707 SO libspdk_ut.so.2.0 00:03:49.707 SYMLINK libspdk_log.so 00:03:49.707 SYMLINK libspdk_ut_mock.so 00:03:49.707 SYMLINK libspdk_ut.so 00:03:49.707 CXX lib/trace_parser/trace.o 00:03:49.707 CC lib/util/base64.o 00:03:49.707 CC lib/util/cpuset.o 00:03:49.707 CC lib/util/bit_array.o 00:03:49.707 CC lib/util/crc16.o 00:03:49.707 CC lib/util/crc32.o 00:03:49.707 CC lib/util/crc32c.o 00:03:49.707 CC lib/dma/dma.o 00:03:49.707 CC lib/ioat/ioat.o 00:03:49.707 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.707 CC lib/vfio_user/host/vfio_user.o 00:03:49.707 CC lib/util/crc32_ieee.o 00:03:49.707 CC lib/util/crc64.o 00:03:49.707 CC lib/util/dif.o 00:03:49.707 CC lib/util/fd.o 00:03:49.707 CC lib/util/fd_group.o 00:03:49.707 CC lib/util/file.o 00:03:49.707 CC lib/util/hexlify.o 00:03:49.707 LIB libspdk_dma.a 00:03:49.707 SO libspdk_dma.so.5.0 00:03:49.707 CC lib/util/iov.o 00:03:49.707 SYMLINK libspdk_dma.so 00:03:49.707 CC lib/util/math.o 00:03:49.707 CC lib/util/net.o 00:03:49.707 LIB libspdk_ioat.a 00:03:49.707 SO libspdk_ioat.so.7.0 00:03:49.707 LIB libspdk_vfio_user.a 00:03:49.707 CC 
lib/util/pipe.o 00:03:49.707 CC lib/util/strerror_tls.o 00:03:49.707 SO libspdk_vfio_user.so.5.0 00:03:49.707 SYMLINK libspdk_ioat.so 00:03:49.707 CC lib/util/string.o 00:03:49.707 CC lib/util/uuid.o 00:03:49.707 SYMLINK libspdk_vfio_user.so 00:03:49.707 CC lib/util/xor.o 00:03:49.707 CC lib/util/zipf.o 00:03:49.707 CC lib/util/md5.o 00:03:49.707 LIB libspdk_util.a 00:03:49.707 LIB libspdk_trace_parser.a 00:03:49.707 SO libspdk_trace_parser.so.6.0 00:03:49.707 SO libspdk_util.so.10.1 00:03:49.707 SYMLINK libspdk_trace_parser.so 00:03:49.707 SYMLINK libspdk_util.so 00:03:49.707 CC lib/vmd/vmd.o 00:03:49.707 CC lib/vmd/led.o 00:03:49.707 CC lib/conf/conf.o 00:03:49.707 CC lib/json/json_parse.o 00:03:49.707 CC lib/json/json_write.o 00:03:49.707 CC lib/json/json_util.o 00:03:49.707 CC lib/rdma_utils/rdma_utils.o 00:03:49.707 CC lib/idxd/idxd.o 00:03:49.707 CC lib/idxd/idxd_user.o 00:03:49.707 CC lib/env_dpdk/env.o 00:03:49.707 CC lib/env_dpdk/memory.o 00:03:49.707 CC lib/idxd/idxd_kernel.o 00:03:49.707 LIB libspdk_conf.a 00:03:49.707 CC lib/env_dpdk/pci.o 00:03:49.707 LIB libspdk_rdma_utils.a 00:03:49.707 SO libspdk_conf.so.6.0 00:03:49.707 CC lib/env_dpdk/init.o 00:03:49.707 SO libspdk_rdma_utils.so.1.0 00:03:49.707 LIB libspdk_json.a 00:03:49.707 SYMLINK libspdk_conf.so 00:03:49.707 SYMLINK libspdk_rdma_utils.so 00:03:49.707 CC lib/env_dpdk/threads.o 00:03:49.707 CC lib/env_dpdk/pci_ioat.o 00:03:49.707 SO libspdk_json.so.6.0 00:03:49.707 CC lib/env_dpdk/pci_virtio.o 00:03:49.707 SYMLINK libspdk_json.so 00:03:49.707 CC lib/env_dpdk/pci_vmd.o 00:03:49.707 CC lib/env_dpdk/pci_idxd.o 00:03:49.707 CC lib/env_dpdk/pci_event.o 00:03:49.707 CC lib/env_dpdk/sigbus_handler.o 00:03:49.707 CC lib/rdma_provider/common.o 00:03:49.707 CC lib/env_dpdk/pci_dpdk.o 00:03:49.707 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.707 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.707 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.707 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.707 LIB libspdk_vmd.a 
00:03:49.707 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.707 SO libspdk_vmd.so.6.0 00:03:49.707 LIB libspdk_idxd.a 00:03:49.707 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:49.707 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.707 SO libspdk_idxd.so.12.1 00:03:49.707 SYMLINK libspdk_vmd.so 00:03:49.707 SYMLINK libspdk_idxd.so 00:03:49.707 LIB libspdk_jsonrpc.a 00:03:49.707 LIB libspdk_rdma_provider.a 00:03:49.707 SO libspdk_jsonrpc.so.6.0 00:03:49.707 SO libspdk_rdma_provider.so.7.0 00:03:49.707 SYMLINK libspdk_jsonrpc.so 00:03:49.708 SYMLINK libspdk_rdma_provider.so 00:03:49.708 CC lib/rpc/rpc.o 00:03:49.708 LIB libspdk_env_dpdk.a 00:03:49.708 SO libspdk_env_dpdk.so.15.1 00:03:49.708 LIB libspdk_rpc.a 00:03:49.708 SO libspdk_rpc.so.6.0 00:03:49.708 SYMLINK libspdk_env_dpdk.so 00:03:49.708 SYMLINK libspdk_rpc.so 00:03:49.708 CC lib/keyring/keyring.o 00:03:49.708 CC lib/keyring/keyring_rpc.o 00:03:49.708 CC lib/trace/trace_flags.o 00:03:49.708 CC lib/trace/trace.o 00:03:49.708 CC lib/trace/trace_rpc.o 00:03:49.708 CC lib/notify/notify_rpc.o 00:03:49.708 CC lib/notify/notify.o 00:03:49.708 LIB libspdk_notify.a 00:03:49.708 SO libspdk_notify.so.6.0 00:03:49.708 LIB libspdk_keyring.a 00:03:49.708 SO libspdk_keyring.so.2.0 00:03:49.708 SYMLINK libspdk_notify.so 00:03:49.708 LIB libspdk_trace.a 00:03:49.708 SYMLINK libspdk_keyring.so 00:03:49.708 SO libspdk_trace.so.11.0 00:03:49.708 SYMLINK libspdk_trace.so 00:03:49.708 CC lib/thread/iobuf.o 00:03:49.708 CC lib/thread/thread.o 00:03:49.708 CC lib/sock/sock.o 00:03:49.708 CC lib/sock/sock_rpc.o 00:03:49.708 LIB libspdk_sock.a 00:03:49.708 SO libspdk_sock.so.10.0 00:03:49.708 SYMLINK libspdk_sock.so 00:03:49.708 CC lib/nvme/nvme_ctrlr.o 00:03:49.708 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:49.708 CC lib/nvme/nvme_fabric.o 00:03:49.708 CC lib/nvme/nvme_ns_cmd.o 00:03:49.708 CC lib/nvme/nvme_ns.o 00:03:49.708 CC lib/nvme/nvme_pcie_common.o 00:03:49.708 CC lib/nvme/nvme_pcie.o 00:03:49.708 CC lib/nvme/nvme.o 00:03:49.708 CC 
lib/nvme/nvme_qpair.o 00:03:49.708 LIB libspdk_thread.a 00:03:49.708 SO libspdk_thread.so.11.0 00:03:49.708 CC lib/nvme/nvme_quirks.o 00:03:49.708 CC lib/nvme/nvme_transport.o 00:03:49.708 SYMLINK libspdk_thread.so 00:03:49.708 CC lib/nvme/nvme_discovery.o 00:03:49.708 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:49.708 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:49.967 CC lib/nvme/nvme_tcp.o 00:03:49.967 CC lib/nvme/nvme_opal.o 00:03:49.967 CC lib/nvme/nvme_io_msg.o 00:03:50.225 CC lib/nvme/nvme_poll_group.o 00:03:50.225 CC lib/nvme/nvme_zns.o 00:03:50.225 CC lib/nvme/nvme_stubs.o 00:03:50.225 CC lib/nvme/nvme_auth.o 00:03:50.225 CC lib/nvme/nvme_cuse.o 00:03:50.484 CC lib/nvme/nvme_rdma.o 00:03:50.742 CC lib/accel/accel.o 00:03:50.742 CC lib/accel/accel_rpc.o 00:03:50.742 CC lib/blob/blobstore.o 00:03:51.000 CC lib/init/json_config.o 00:03:51.000 CC lib/virtio/virtio.o 00:03:51.000 CC lib/blob/request.o 00:03:51.304 CC lib/init/subsystem.o 00:03:51.304 CC lib/blob/zeroes.o 00:03:51.304 CC lib/init/subsystem_rpc.o 00:03:51.304 CC lib/virtio/virtio_vhost_user.o 00:03:51.304 CC lib/virtio/virtio_vfio_user.o 00:03:51.304 CC lib/virtio/virtio_pci.o 00:03:51.304 CC lib/blob/blob_bs_dev.o 00:03:51.304 CC lib/accel/accel_sw.o 00:03:51.304 CC lib/init/rpc.o 00:03:51.563 LIB libspdk_init.a 00:03:51.563 SO libspdk_init.so.6.0 00:03:51.563 CC lib/fsdev/fsdev.o 00:03:51.563 CC lib/fsdev/fsdev_rpc.o 00:03:51.563 CC lib/fsdev/fsdev_io.o 00:03:51.563 LIB libspdk_virtio.a 00:03:51.563 SYMLINK libspdk_init.so 00:03:51.563 SO libspdk_virtio.so.7.0 00:03:51.822 SYMLINK libspdk_virtio.so 00:03:51.822 LIB libspdk_nvme.a 00:03:51.822 CC lib/event/app.o 00:03:51.822 CC lib/event/reactor.o 00:03:51.822 CC lib/event/log_rpc.o 00:03:51.822 CC lib/event/app_rpc.o 00:03:51.822 CC lib/event/scheduler_static.o 00:03:51.822 LIB libspdk_accel.a 00:03:51.822 SO libspdk_accel.so.16.0 00:03:52.081 SYMLINK libspdk_accel.so 00:03:52.081 SO libspdk_nvme.so.15.0 00:03:52.340 SYMLINK libspdk_nvme.so 00:03:52.340 
CC lib/bdev/bdev.o 00:03:52.340 CC lib/bdev/bdev_zone.o 00:03:52.340 CC lib/bdev/bdev_rpc.o 00:03:52.340 CC lib/bdev/part.o 00:03:52.340 CC lib/bdev/scsi_nvme.o 00:03:52.340 LIB libspdk_fsdev.a 00:03:52.340 SO libspdk_fsdev.so.2.0 00:03:52.340 LIB libspdk_event.a 00:03:52.340 SYMLINK libspdk_fsdev.so 00:03:52.340 SO libspdk_event.so.14.0 00:03:52.599 SYMLINK libspdk_event.so 00:03:52.599 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:53.538 LIB libspdk_fuse_dispatcher.a 00:03:53.538 SO libspdk_fuse_dispatcher.so.1.0 00:03:53.538 SYMLINK libspdk_fuse_dispatcher.so 00:03:54.477 LIB libspdk_blob.a 00:03:54.736 SO libspdk_blob.so.12.0 00:03:54.736 SYMLINK libspdk_blob.so 00:03:55.303 LIB libspdk_bdev.a 00:03:55.303 CC lib/blobfs/blobfs.o 00:03:55.303 CC lib/blobfs/tree.o 00:03:55.303 SO libspdk_bdev.so.17.0 00:03:55.303 CC lib/lvol/lvol.o 00:03:55.303 SYMLINK libspdk_bdev.so 00:03:55.562 CC lib/nvmf/ctrlr.o 00:03:55.562 CC lib/nbd/nbd.o 00:03:55.562 CC lib/nvmf/ctrlr_discovery.o 00:03:55.562 CC lib/scsi/dev.o 00:03:55.562 CC lib/scsi/lun.o 00:03:55.562 CC lib/nvmf/ctrlr_bdev.o 00:03:55.562 CC lib/ublk/ublk.o 00:03:55.562 CC lib/ftl/ftl_core.o 00:03:55.820 CC lib/ublk/ublk_rpc.o 00:03:55.820 CC lib/scsi/port.o 00:03:55.820 CC lib/nvmf/subsystem.o 00:03:56.079 CC lib/ftl/ftl_init.o 00:03:56.079 CC lib/nbd/nbd_rpc.o 00:03:56.079 CC lib/ftl/ftl_layout.o 00:03:56.079 CC lib/scsi/scsi.o 00:03:56.079 LIB libspdk_blobfs.a 00:03:56.079 SO libspdk_blobfs.so.11.0 00:03:56.079 CC lib/ftl/ftl_debug.o 00:03:56.079 LIB libspdk_nbd.a 00:03:56.079 SO libspdk_nbd.so.7.0 00:03:56.337 SYMLINK libspdk_blobfs.so 00:03:56.337 CC lib/nvmf/nvmf.o 00:03:56.337 CC lib/scsi/scsi_bdev.o 00:03:56.337 LIB libspdk_lvol.a 00:03:56.337 LIB libspdk_ublk.a 00:03:56.337 SYMLINK libspdk_nbd.so 00:03:56.337 CC lib/nvmf/nvmf_rpc.o 00:03:56.337 SO libspdk_lvol.so.11.0 00:03:56.337 SO libspdk_ublk.so.3.0 00:03:56.337 CC lib/nvmf/transport.o 00:03:56.337 SYMLINK libspdk_lvol.so 00:03:56.337 CC lib/nvmf/tcp.o 
00:03:56.337 SYMLINK libspdk_ublk.so 00:03:56.337 CC lib/nvmf/stubs.o 00:03:56.337 CC lib/nvmf/mdns_server.o 00:03:56.337 CC lib/ftl/ftl_io.o 00:03:56.596 CC lib/ftl/ftl_sb.o 00:03:56.855 CC lib/scsi/scsi_pr.o 00:03:56.855 CC lib/nvmf/rdma.o 00:03:56.855 CC lib/nvmf/auth.o 00:03:56.855 CC lib/ftl/ftl_l2p.o 00:03:57.113 CC lib/scsi/scsi_rpc.o 00:03:57.114 CC lib/ftl/ftl_l2p_flat.o 00:03:57.114 CC lib/scsi/task.o 00:03:57.114 CC lib/ftl/ftl_nv_cache.o 00:03:57.114 CC lib/ftl/ftl_band.o 00:03:57.114 CC lib/ftl/ftl_band_ops.o 00:03:57.373 CC lib/ftl/ftl_writer.o 00:03:57.373 CC lib/ftl/ftl_rq.o 00:03:57.373 LIB libspdk_scsi.a 00:03:57.373 SO libspdk_scsi.so.9.0 00:03:57.632 SYMLINK libspdk_scsi.so 00:03:57.632 CC lib/ftl/ftl_reloc.o 00:03:57.632 CC lib/ftl/ftl_l2p_cache.o 00:03:57.632 CC lib/ftl/ftl_p2l.o 00:03:57.632 CC lib/ftl/ftl_p2l_log.o 00:03:57.632 CC lib/iscsi/conn.o 00:03:57.632 CC lib/iscsi/init_grp.o 00:03:57.632 CC lib/vhost/vhost.o 00:03:57.891 CC lib/ftl/mngt/ftl_mngt.o 00:03:57.891 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:57.891 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:57.891 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:58.150 CC lib/iscsi/iscsi.o 00:03:58.150 CC lib/iscsi/param.o 00:03:58.150 CC lib/iscsi/portal_grp.o 00:03:58.150 CC lib/iscsi/tgt_node.o 00:03:58.150 CC lib/iscsi/iscsi_subsystem.o 00:03:58.150 CC lib/iscsi/iscsi_rpc.o 00:03:58.150 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:58.408 CC lib/iscsi/task.o 00:03:58.408 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:58.408 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:58.408 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:58.667 CC lib/vhost/vhost_rpc.o 00:03:58.667 CC lib/vhost/vhost_scsi.o 00:03:58.667 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:58.667 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:58.667 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:58.667 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:58.667 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:58.667 CC lib/vhost/vhost_blk.o 00:03:58.926 CC lib/ftl/utils/ftl_conf.o 00:03:58.926 CC 
lib/vhost/rte_vhost_user.o 00:03:58.926 CC lib/ftl/utils/ftl_md.o 00:03:58.926 CC lib/ftl/utils/ftl_mempool.o 00:03:58.926 CC lib/ftl/utils/ftl_bitmap.o 00:03:59.185 CC lib/ftl/utils/ftl_property.o 00:03:59.185 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:59.185 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:59.185 LIB libspdk_nvmf.a 00:03:59.185 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:59.185 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:59.444 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:59.444 SO libspdk_nvmf.so.20.0 00:03:59.444 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:59.444 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:59.444 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:59.444 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:59.444 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:59.444 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:59.444 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:59.444 SYMLINK libspdk_nvmf.so 00:03:59.444 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:59.702 LIB libspdk_iscsi.a 00:03:59.702 CC lib/ftl/base/ftl_base_dev.o 00:03:59.702 CC lib/ftl/base/ftl_base_bdev.o 00:03:59.702 CC lib/ftl/ftl_trace.o 00:03:59.702 SO libspdk_iscsi.so.8.0 00:03:59.962 SYMLINK libspdk_iscsi.so 00:03:59.962 LIB libspdk_vhost.a 00:03:59.962 LIB libspdk_ftl.a 00:03:59.962 SO libspdk_vhost.so.8.0 00:03:59.962 SYMLINK libspdk_vhost.so 00:04:00.220 SO libspdk_ftl.so.9.0 00:04:00.481 SYMLINK libspdk_ftl.so 00:04:00.739 CC module/env_dpdk/env_dpdk_rpc.o 00:04:00.739 CC module/keyring/linux/keyring.o 00:04:00.739 CC module/blob/bdev/blob_bdev.o 00:04:00.739 CC module/accel/ioat/accel_ioat.o 00:04:00.739 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:00.739 CC module/fsdev/aio/fsdev_aio.o 00:04:00.739 CC module/keyring/file/keyring.o 00:04:00.739 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:00.739 CC module/accel/error/accel_error.o 00:04:00.739 CC module/sock/posix/posix.o 00:04:00.999 LIB libspdk_env_dpdk_rpc.a 00:04:00.999 SO libspdk_env_dpdk_rpc.so.6.0 00:04:00.999 CC 
module/keyring/file/keyring_rpc.o 00:04:00.999 LIB libspdk_scheduler_dpdk_governor.a 00:04:00.999 SYMLINK libspdk_env_dpdk_rpc.so 00:04:00.999 CC module/keyring/linux/keyring_rpc.o 00:04:00.999 CC module/accel/ioat/accel_ioat_rpc.o 00:04:00.999 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:00.999 LIB libspdk_scheduler_dynamic.a 00:04:00.999 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:00.999 CC module/accel/error/accel_error_rpc.o 00:04:00.999 SO libspdk_scheduler_dynamic.so.4.0 00:04:00.999 LIB libspdk_blob_bdev.a 00:04:00.999 LIB libspdk_keyring_file.a 00:04:00.999 LIB libspdk_accel_ioat.a 00:04:00.999 LIB libspdk_keyring_linux.a 00:04:01.258 SO libspdk_blob_bdev.so.12.0 00:04:01.258 SO libspdk_keyring_file.so.2.0 00:04:01.258 SO libspdk_accel_ioat.so.6.0 00:04:01.258 SYMLINK libspdk_scheduler_dynamic.so 00:04:01.258 SO libspdk_keyring_linux.so.1.0 00:04:01.258 CC module/scheduler/gscheduler/gscheduler.o 00:04:01.258 SYMLINK libspdk_keyring_file.so 00:04:01.258 SYMLINK libspdk_blob_bdev.so 00:04:01.258 SYMLINK libspdk_keyring_linux.so 00:04:01.258 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:01.258 CC module/fsdev/aio/linux_aio_mgr.o 00:04:01.258 LIB libspdk_accel_error.a 00:04:01.258 SYMLINK libspdk_accel_ioat.so 00:04:01.258 SO libspdk_accel_error.so.2.0 00:04:01.258 CC module/accel/dsa/accel_dsa.o 00:04:01.258 SYMLINK libspdk_accel_error.so 00:04:01.258 CC module/accel/dsa/accel_dsa_rpc.o 00:04:01.258 LIB libspdk_scheduler_gscheduler.a 00:04:01.258 CC module/accel/iaa/accel_iaa.o 00:04:01.258 SO libspdk_scheduler_gscheduler.so.4.0 00:04:01.517 CC module/bdev/delay/vbdev_delay.o 00:04:01.517 SYMLINK libspdk_scheduler_gscheduler.so 00:04:01.517 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:01.517 CC module/blobfs/bdev/blobfs_bdev.o 00:04:01.517 CC module/bdev/error/vbdev_error.o 00:04:01.517 CC module/bdev/gpt/gpt.o 00:04:01.517 LIB libspdk_accel_dsa.a 00:04:01.517 CC module/accel/iaa/accel_iaa_rpc.o 00:04:01.517 SO libspdk_accel_dsa.so.5.0 00:04:01.517 LIB 
libspdk_fsdev_aio.a 00:04:01.517 CC module/bdev/lvol/vbdev_lvol.o 00:04:01.517 SO libspdk_fsdev_aio.so.1.0 00:04:01.517 SYMLINK libspdk_accel_dsa.so 00:04:01.517 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:01.517 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:01.776 SYMLINK libspdk_fsdev_aio.so 00:04:01.776 LIB libspdk_sock_posix.a 00:04:01.776 CC module/bdev/gpt/vbdev_gpt.o 00:04:01.776 LIB libspdk_accel_iaa.a 00:04:01.776 SO libspdk_accel_iaa.so.3.0 00:04:01.776 SO libspdk_sock_posix.so.6.0 00:04:01.776 CC module/bdev/error/vbdev_error_rpc.o 00:04:01.776 SYMLINK libspdk_accel_iaa.so 00:04:01.776 CC module/bdev/malloc/bdev_malloc.o 00:04:01.776 SYMLINK libspdk_sock_posix.so 00:04:01.776 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:01.776 LIB libspdk_bdev_delay.a 00:04:01.776 LIB libspdk_blobfs_bdev.a 00:04:01.776 SO libspdk_blobfs_bdev.so.6.0 00:04:01.776 SO libspdk_bdev_delay.so.6.0 00:04:01.776 LIB libspdk_bdev_error.a 00:04:01.776 SYMLINK libspdk_blobfs_bdev.so 00:04:01.776 CC module/bdev/nvme/bdev_nvme.o 00:04:01.776 SYMLINK libspdk_bdev_delay.so 00:04:01.776 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:02.035 CC module/bdev/nvme/nvme_rpc.o 00:04:02.035 CC module/bdev/null/bdev_null.o 00:04:02.035 SO libspdk_bdev_error.so.6.0 00:04:02.035 CC module/bdev/nvme/bdev_mdns_client.o 00:04:02.035 LIB libspdk_bdev_gpt.a 00:04:02.035 SYMLINK libspdk_bdev_error.so 00:04:02.035 CC module/bdev/null/bdev_null_rpc.o 00:04:02.035 SO libspdk_bdev_gpt.so.6.0 00:04:02.035 CC module/bdev/nvme/vbdev_opal.o 00:04:02.035 SYMLINK libspdk_bdev_gpt.so 00:04:02.035 LIB libspdk_bdev_lvol.a 00:04:02.035 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:02.035 SO libspdk_bdev_lvol.so.6.0 00:04:02.035 LIB libspdk_bdev_malloc.a 00:04:02.294 SO libspdk_bdev_malloc.so.6.0 00:04:02.294 CC module/bdev/passthru/vbdev_passthru.o 00:04:02.294 SYMLINK libspdk_bdev_lvol.so 00:04:02.294 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:02.294 LIB libspdk_bdev_null.a 00:04:02.294 SYMLINK libspdk_bdev_malloc.so 
00:04:02.294 SO libspdk_bdev_null.so.6.0 00:04:02.294 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:02.294 CC module/bdev/split/vbdev_split.o 00:04:02.294 CC module/bdev/raid/bdev_raid.o 00:04:02.294 SYMLINK libspdk_bdev_null.so 00:04:02.294 CC module/bdev/split/vbdev_split_rpc.o 00:04:02.294 CC module/bdev/raid/bdev_raid_rpc.o 00:04:02.553 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:02.553 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:02.553 CC module/bdev/aio/bdev_aio.o 00:04:02.553 LIB libspdk_bdev_split.a 00:04:02.553 LIB libspdk_bdev_passthru.a 00:04:02.553 SO libspdk_bdev_split.so.6.0 00:04:02.553 SO libspdk_bdev_passthru.so.6.0 00:04:02.553 SYMLINK libspdk_bdev_split.so 00:04:02.553 CC module/bdev/raid/bdev_raid_sb.o 00:04:02.553 CC module/bdev/raid/raid0.o 00:04:02.553 SYMLINK libspdk_bdev_passthru.so 00:04:02.553 CC module/bdev/raid/raid1.o 00:04:02.812 CC module/bdev/ftl/bdev_ftl.o 00:04:02.812 CC module/bdev/iscsi/bdev_iscsi.o 00:04:02.812 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:02.812 LIB libspdk_bdev_zone_block.a 00:04:02.812 SO libspdk_bdev_zone_block.so.6.0 00:04:02.812 CC module/bdev/aio/bdev_aio_rpc.o 00:04:02.812 SYMLINK libspdk_bdev_zone_block.so 00:04:02.812 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:02.812 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:02.812 CC module/bdev/raid/concat.o 00:04:02.812 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:03.071 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:03.071 LIB libspdk_bdev_aio.a 00:04:03.071 SO libspdk_bdev_aio.so.6.0 00:04:03.071 LIB libspdk_bdev_ftl.a 00:04:03.071 SYMLINK libspdk_bdev_aio.so 00:04:03.071 CC module/bdev/raid/raid5f.o 00:04:03.071 SO libspdk_bdev_ftl.so.6.0 00:04:03.071 LIB libspdk_bdev_iscsi.a 00:04:03.071 SYMLINK libspdk_bdev_ftl.so 00:04:03.071 SO libspdk_bdev_iscsi.so.6.0 00:04:03.330 SYMLINK libspdk_bdev_iscsi.so 00:04:03.330 LIB libspdk_bdev_virtio.a 00:04:03.589 SO libspdk_bdev_virtio.so.6.0 00:04:03.589 SYMLINK libspdk_bdev_virtio.so 00:04:03.589 LIB 
libspdk_bdev_raid.a 00:04:03.589 SO libspdk_bdev_raid.so.6.0 00:04:03.848 SYMLINK libspdk_bdev_raid.so 00:04:04.787 LIB libspdk_bdev_nvme.a 00:04:04.787 SO libspdk_bdev_nvme.so.7.1 00:04:04.787 SYMLINK libspdk_bdev_nvme.so 00:04:05.355 CC module/event/subsystems/sock/sock.o 00:04:05.355 CC module/event/subsystems/scheduler/scheduler.o 00:04:05.355 CC module/event/subsystems/keyring/keyring.o 00:04:05.355 CC module/event/subsystems/vmd/vmd.o 00:04:05.355 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:05.355 CC module/event/subsystems/fsdev/fsdev.o 00:04:05.355 CC module/event/subsystems/iobuf/iobuf.o 00:04:05.355 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:05.355 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:05.615 LIB libspdk_event_fsdev.a 00:04:05.615 LIB libspdk_event_sock.a 00:04:05.615 LIB libspdk_event_vmd.a 00:04:05.615 LIB libspdk_event_keyring.a 00:04:05.615 LIB libspdk_event_scheduler.a 00:04:05.615 SO libspdk_event_fsdev.so.1.0 00:04:05.615 LIB libspdk_event_vhost_blk.a 00:04:05.615 SO libspdk_event_sock.so.5.0 00:04:05.615 SO libspdk_event_vmd.so.6.0 00:04:05.615 LIB libspdk_event_iobuf.a 00:04:05.615 SO libspdk_event_keyring.so.1.0 00:04:05.615 SO libspdk_event_vhost_blk.so.3.0 00:04:05.615 SO libspdk_event_scheduler.so.4.0 00:04:05.615 SO libspdk_event_iobuf.so.3.0 00:04:05.615 SYMLINK libspdk_event_sock.so 00:04:05.615 SYMLINK libspdk_event_fsdev.so 00:04:05.615 SYMLINK libspdk_event_vmd.so 00:04:05.615 SYMLINK libspdk_event_keyring.so 00:04:05.615 SYMLINK libspdk_event_vhost_blk.so 00:04:05.615 SYMLINK libspdk_event_scheduler.so 00:04:05.615 SYMLINK libspdk_event_iobuf.so 00:04:06.186 CC module/event/subsystems/accel/accel.o 00:04:06.186 LIB libspdk_event_accel.a 00:04:06.186 SO libspdk_event_accel.so.6.0 00:04:06.445 SYMLINK libspdk_event_accel.so 00:04:06.703 CC module/event/subsystems/bdev/bdev.o 00:04:06.962 LIB libspdk_event_bdev.a 00:04:06.962 SO libspdk_event_bdev.so.6.0 00:04:06.962 SYMLINK libspdk_event_bdev.so 00:04:07.220 
CC module/event/subsystems/nbd/nbd.o 00:04:07.220 CC module/event/subsystems/scsi/scsi.o 00:04:07.220 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:07.221 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:07.221 CC module/event/subsystems/ublk/ublk.o 00:04:07.478 LIB libspdk_event_scsi.a 00:04:07.478 LIB libspdk_event_nbd.a 00:04:07.478 SO libspdk_event_scsi.so.6.0 00:04:07.478 SO libspdk_event_nbd.so.6.0 00:04:07.478 LIB libspdk_event_ublk.a 00:04:07.478 SYMLINK libspdk_event_scsi.so 00:04:07.478 SO libspdk_event_ublk.so.3.0 00:04:07.478 SYMLINK libspdk_event_nbd.so 00:04:07.478 LIB libspdk_event_nvmf.a 00:04:07.478 SO libspdk_event_nvmf.so.6.0 00:04:07.478 SYMLINK libspdk_event_ublk.so 00:04:07.737 SYMLINK libspdk_event_nvmf.so 00:04:07.737 CC module/event/subsystems/iscsi/iscsi.o 00:04:07.737 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:07.995 LIB libspdk_event_vhost_scsi.a 00:04:07.995 LIB libspdk_event_iscsi.a 00:04:07.995 SO libspdk_event_iscsi.so.6.0 00:04:07.995 SO libspdk_event_vhost_scsi.so.3.0 00:04:07.995 SYMLINK libspdk_event_iscsi.so 00:04:08.253 SYMLINK libspdk_event_vhost_scsi.so 00:04:08.253 SO libspdk.so.6.0 00:04:08.253 SYMLINK libspdk.so 00:04:08.819 CC app/spdk_lspci/spdk_lspci.o 00:04:08.819 CC app/trace_record/trace_record.o 00:04:08.819 CXX app/trace/trace.o 00:04:08.819 CC app/nvmf_tgt/nvmf_main.o 00:04:08.819 CC app/iscsi_tgt/iscsi_tgt.o 00:04:08.819 CC app/spdk_tgt/spdk_tgt.o 00:04:08.819 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:08.819 CC examples/util/zipf/zipf.o 00:04:08.819 CC examples/ioat/perf/perf.o 00:04:08.819 CC test/thread/poller_perf/poller_perf.o 00:04:08.819 LINK spdk_lspci 00:04:08.819 LINK nvmf_tgt 00:04:08.819 LINK poller_perf 00:04:08.819 LINK interrupt_tgt 00:04:08.819 LINK zipf 00:04:08.819 LINK iscsi_tgt 00:04:08.819 LINK spdk_tgt 00:04:08.819 LINK spdk_trace_record 00:04:09.077 LINK ioat_perf 00:04:09.077 CC app/spdk_nvme_perf/perf.o 00:04:09.077 LINK spdk_trace 00:04:09.077 CC 
app/spdk_nvme_identify/identify.o 00:04:09.077 CC app/spdk_top/spdk_top.o 00:04:09.077 CC app/spdk_nvme_discover/discovery_aer.o 00:04:09.336 CC test/dma/test_dma/test_dma.o 00:04:09.336 CC app/spdk_dd/spdk_dd.o 00:04:09.336 CC examples/ioat/verify/verify.o 00:04:09.336 CC app/fio/nvme/fio_plugin.o 00:04:09.336 CC examples/thread/thread/thread_ex.o 00:04:09.336 CC app/fio/bdev/fio_plugin.o 00:04:09.336 LINK spdk_nvme_discover 00:04:09.595 LINK verify 00:04:09.595 LINK thread 00:04:09.595 CC app/vhost/vhost.o 00:04:09.854 LINK spdk_dd 00:04:09.854 LINK test_dma 00:04:09.854 LINK vhost 00:04:09.854 CC examples/sock/hello_world/hello_sock.o 00:04:09.854 LINK spdk_bdev 00:04:10.114 LINK spdk_nvme 00:04:10.114 CC examples/vmd/lsvmd/lsvmd.o 00:04:10.114 LINK spdk_nvme_perf 00:04:10.114 CC examples/idxd/perf/perf.o 00:04:10.114 LINK lsvmd 00:04:10.114 LINK spdk_nvme_identify 00:04:10.374 LINK spdk_top 00:04:10.374 LINK hello_sock 00:04:10.374 CC examples/accel/perf/accel_perf.o 00:04:10.374 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:10.374 CC test/app/bdev_svc/bdev_svc.o 00:04:10.374 CC examples/vmd/led/led.o 00:04:10.374 CC examples/blob/hello_world/hello_blob.o 00:04:10.374 LINK led 00:04:10.374 CC test/app/histogram_perf/histogram_perf.o 00:04:10.374 CC test/app/jsoncat/jsoncat.o 00:04:10.374 LINK bdev_svc 00:04:10.633 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:10.633 LINK idxd_perf 00:04:10.633 LINK hello_fsdev 00:04:10.633 LINK hello_blob 00:04:10.633 CC examples/nvme/hello_world/hello_world.o 00:04:10.633 LINK histogram_perf 00:04:10.633 LINK jsoncat 00:04:10.633 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:10.892 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:10.892 LINK hello_world 00:04:10.892 LINK accel_perf 00:04:10.892 CC examples/nvme/reconnect/reconnect.o 00:04:10.892 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:10.892 CC test/app/stub/stub.o 00:04:10.892 CC test/blobfs/mkfs/mkfs.o 00:04:10.892 CC examples/blob/cli/blobcli.o 00:04:10.892 
CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:10.892 LINK nvme_fuzz 00:04:11.152 LINK stub 00:04:11.152 CC examples/nvme/arbitration/arbitration.o 00:04:11.152 LINK mkfs 00:04:11.152 TEST_HEADER include/spdk/accel.h 00:04:11.152 TEST_HEADER include/spdk/accel_module.h 00:04:11.152 TEST_HEADER include/spdk/assert.h 00:04:11.152 TEST_HEADER include/spdk/barrier.h 00:04:11.152 TEST_HEADER include/spdk/base64.h 00:04:11.152 TEST_HEADER include/spdk/bdev.h 00:04:11.152 TEST_HEADER include/spdk/bdev_module.h 00:04:11.152 TEST_HEADER include/spdk/bdev_zone.h 00:04:11.152 TEST_HEADER include/spdk/bit_array.h 00:04:11.152 TEST_HEADER include/spdk/bit_pool.h 00:04:11.152 TEST_HEADER include/spdk/blob_bdev.h 00:04:11.152 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:11.152 TEST_HEADER include/spdk/blobfs.h 00:04:11.152 TEST_HEADER include/spdk/blob.h 00:04:11.152 TEST_HEADER include/spdk/conf.h 00:04:11.152 TEST_HEADER include/spdk/config.h 00:04:11.152 TEST_HEADER include/spdk/cpuset.h 00:04:11.152 TEST_HEADER include/spdk/crc16.h 00:04:11.152 TEST_HEADER include/spdk/crc32.h 00:04:11.152 TEST_HEADER include/spdk/crc64.h 00:04:11.152 TEST_HEADER include/spdk/dif.h 00:04:11.152 TEST_HEADER include/spdk/dma.h 00:04:11.152 TEST_HEADER include/spdk/endian.h 00:04:11.152 TEST_HEADER include/spdk/env_dpdk.h 00:04:11.152 TEST_HEADER include/spdk/env.h 00:04:11.152 TEST_HEADER include/spdk/event.h 00:04:11.152 TEST_HEADER include/spdk/fd_group.h 00:04:11.152 TEST_HEADER include/spdk/fd.h 00:04:11.152 TEST_HEADER include/spdk/file.h 00:04:11.152 TEST_HEADER include/spdk/fsdev.h 00:04:11.152 TEST_HEADER include/spdk/fsdev_module.h 00:04:11.152 TEST_HEADER include/spdk/ftl.h 00:04:11.152 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:11.152 TEST_HEADER include/spdk/gpt_spec.h 00:04:11.152 TEST_HEADER include/spdk/hexlify.h 00:04:11.152 TEST_HEADER include/spdk/histogram_data.h 00:04:11.153 TEST_HEADER include/spdk/idxd.h 00:04:11.153 TEST_HEADER include/spdk/idxd_spec.h 00:04:11.153 
TEST_HEADER include/spdk/init.h 00:04:11.153 TEST_HEADER include/spdk/ioat.h 00:04:11.153 TEST_HEADER include/spdk/ioat_spec.h 00:04:11.153 TEST_HEADER include/spdk/iscsi_spec.h 00:04:11.153 TEST_HEADER include/spdk/json.h 00:04:11.153 TEST_HEADER include/spdk/jsonrpc.h 00:04:11.153 TEST_HEADER include/spdk/keyring.h 00:04:11.153 TEST_HEADER include/spdk/keyring_module.h 00:04:11.153 TEST_HEADER include/spdk/likely.h 00:04:11.153 TEST_HEADER include/spdk/log.h 00:04:11.153 TEST_HEADER include/spdk/lvol.h 00:04:11.153 TEST_HEADER include/spdk/md5.h 00:04:11.153 TEST_HEADER include/spdk/memory.h 00:04:11.153 CC examples/nvme/hotplug/hotplug.o 00:04:11.153 TEST_HEADER include/spdk/mmio.h 00:04:11.153 TEST_HEADER include/spdk/nbd.h 00:04:11.153 TEST_HEADER include/spdk/net.h 00:04:11.153 TEST_HEADER include/spdk/notify.h 00:04:11.153 TEST_HEADER include/spdk/nvme.h 00:04:11.153 TEST_HEADER include/spdk/nvme_intel.h 00:04:11.153 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:11.153 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:11.153 TEST_HEADER include/spdk/nvme_spec.h 00:04:11.153 TEST_HEADER include/spdk/nvme_zns.h 00:04:11.153 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:11.153 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:11.153 TEST_HEADER include/spdk/nvmf.h 00:04:11.153 TEST_HEADER include/spdk/nvmf_spec.h 00:04:11.153 TEST_HEADER include/spdk/nvmf_transport.h 00:04:11.153 TEST_HEADER include/spdk/opal.h 00:04:11.153 TEST_HEADER include/spdk/opal_spec.h 00:04:11.153 TEST_HEADER include/spdk/pci_ids.h 00:04:11.153 TEST_HEADER include/spdk/pipe.h 00:04:11.153 LINK reconnect 00:04:11.153 TEST_HEADER include/spdk/queue.h 00:04:11.153 TEST_HEADER include/spdk/reduce.h 00:04:11.153 TEST_HEADER include/spdk/rpc.h 00:04:11.153 TEST_HEADER include/spdk/scheduler.h 00:04:11.153 TEST_HEADER include/spdk/scsi.h 00:04:11.153 TEST_HEADER include/spdk/scsi_spec.h 00:04:11.412 TEST_HEADER include/spdk/sock.h 00:04:11.412 TEST_HEADER include/spdk/stdinc.h 00:04:11.412 TEST_HEADER 
include/spdk/string.h 00:04:11.412 TEST_HEADER include/spdk/thread.h 00:04:11.412 TEST_HEADER include/spdk/trace.h 00:04:11.412 TEST_HEADER include/spdk/trace_parser.h 00:04:11.412 TEST_HEADER include/spdk/tree.h 00:04:11.412 TEST_HEADER include/spdk/ublk.h 00:04:11.412 TEST_HEADER include/spdk/util.h 00:04:11.412 TEST_HEADER include/spdk/uuid.h 00:04:11.412 TEST_HEADER include/spdk/version.h 00:04:11.412 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:11.412 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:11.412 TEST_HEADER include/spdk/vhost.h 00:04:11.412 TEST_HEADER include/spdk/vmd.h 00:04:11.412 TEST_HEADER include/spdk/xor.h 00:04:11.412 TEST_HEADER include/spdk/zipf.h 00:04:11.412 CXX test/cpp_headers/accel.o 00:04:11.412 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:11.412 LINK vhost_fuzz 00:04:11.412 LINK arbitration 00:04:11.412 CXX test/cpp_headers/accel_module.o 00:04:11.412 LINK nvme_manage 00:04:11.413 LINK cmb_copy 00:04:11.413 LINK hotplug 00:04:11.672 LINK blobcli 00:04:11.672 CC test/env/mem_callbacks/mem_callbacks.o 00:04:11.672 CXX test/cpp_headers/assert.o 00:04:11.672 CC test/event/reactor/reactor.o 00:04:11.672 CC test/event/event_perf/event_perf.o 00:04:11.672 CC test/env/vtophys/vtophys.o 00:04:11.672 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:11.672 CC examples/nvme/abort/abort.o 00:04:11.672 CC test/env/memory/memory_ut.o 00:04:11.931 CXX test/cpp_headers/barrier.o 00:04:11.931 LINK reactor 00:04:11.931 LINK mem_callbacks 00:04:11.931 LINK event_perf 00:04:11.931 LINK vtophys 00:04:11.931 CC examples/bdev/hello_world/hello_bdev.o 00:04:11.931 LINK env_dpdk_post_init 00:04:11.931 CXX test/cpp_headers/base64.o 00:04:11.931 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:11.931 CC test/event/reactor_perf/reactor_perf.o 00:04:12.191 CC examples/bdev/bdevperf/bdevperf.o 00:04:12.191 CXX test/cpp_headers/bdev.o 00:04:12.191 LINK hello_bdev 00:04:12.191 CC test/env/pci/pci_ut.o 00:04:12.191 LINK abort 00:04:12.191 LINK 
reactor_perf 00:04:12.191 CC test/event/app_repeat/app_repeat.o 00:04:12.191 LINK pmr_persistence 00:04:12.191 CXX test/cpp_headers/bdev_module.o 00:04:12.451 LINK app_repeat 00:04:12.451 CXX test/cpp_headers/bdev_zone.o 00:04:12.451 CC test/event/scheduler/scheduler.o 00:04:12.451 CC test/rpc_client/rpc_client_test.o 00:04:12.451 CC test/nvme/aer/aer.o 00:04:12.709 CC test/lvol/esnap/esnap.o 00:04:12.709 LINK pci_ut 00:04:12.710 CXX test/cpp_headers/bit_array.o 00:04:12.710 LINK memory_ut 00:04:12.710 LINK rpc_client_test 00:04:12.710 LINK scheduler 00:04:12.710 CC test/accel/dif/dif.o 00:04:12.710 LINK iscsi_fuzz 00:04:12.710 CXX test/cpp_headers/bit_pool.o 00:04:12.969 LINK aer 00:04:12.969 CXX test/cpp_headers/blob_bdev.o 00:04:12.969 CC test/nvme/reset/reset.o 00:04:12.969 CC test/nvme/sgl/sgl.o 00:04:12.969 CC test/nvme/e2edp/nvme_dp.o 00:04:12.969 LINK bdevperf 00:04:12.969 CC test/nvme/overhead/overhead.o 00:04:13.229 CC test/nvme/err_injection/err_injection.o 00:04:13.229 CXX test/cpp_headers/blobfs_bdev.o 00:04:13.229 CC test/nvme/startup/startup.o 00:04:13.229 LINK reset 00:04:13.229 LINK sgl 00:04:13.229 LINK nvme_dp 00:04:13.229 CXX test/cpp_headers/blobfs.o 00:04:13.229 LINK startup 00:04:13.229 LINK err_injection 00:04:13.489 CC examples/nvmf/nvmf/nvmf.o 00:04:13.489 LINK overhead 00:04:13.489 CXX test/cpp_headers/blob.o 00:04:13.489 CXX test/cpp_headers/conf.o 00:04:13.489 CC test/nvme/reserve/reserve.o 00:04:13.489 CC test/nvme/simple_copy/simple_copy.o 00:04:13.489 LINK dif 00:04:13.489 CC test/nvme/connect_stress/connect_stress.o 00:04:13.489 CC test/nvme/boot_partition/boot_partition.o 00:04:13.489 CXX test/cpp_headers/config.o 00:04:13.748 CC test/nvme/compliance/nvme_compliance.o 00:04:13.748 CXX test/cpp_headers/cpuset.o 00:04:13.748 LINK reserve 00:04:13.748 CC test/nvme/fused_ordering/fused_ordering.o 00:04:13.748 LINK boot_partition 00:04:13.748 LINK nvmf 00:04:13.748 CXX test/cpp_headers/crc16.o 00:04:13.748 LINK connect_stress 
00:04:13.748 LINK simple_copy 00:04:13.748 CXX test/cpp_headers/crc32.o 00:04:14.007 CXX test/cpp_headers/crc64.o 00:04:14.007 CXX test/cpp_headers/dif.o 00:04:14.007 LINK fused_ordering 00:04:14.007 CC test/nvme/fdp/fdp.o 00:04:14.007 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:14.007 CC test/nvme/cuse/cuse.o 00:04:14.007 CC test/bdev/bdevio/bdevio.o 00:04:14.007 LINK nvme_compliance 00:04:14.007 CXX test/cpp_headers/dma.o 00:04:14.007 CXX test/cpp_headers/endian.o 00:04:14.007 CXX test/cpp_headers/env_dpdk.o 00:04:14.007 CXX test/cpp_headers/env.o 00:04:14.007 LINK doorbell_aers 00:04:14.266 CXX test/cpp_headers/event.o 00:04:14.266 CXX test/cpp_headers/fd_group.o 00:04:14.266 CXX test/cpp_headers/fd.o 00:04:14.266 CXX test/cpp_headers/file.o 00:04:14.266 CXX test/cpp_headers/fsdev.o 00:04:14.266 CXX test/cpp_headers/fsdev_module.o 00:04:14.266 LINK fdp 00:04:14.266 CXX test/cpp_headers/ftl.o 00:04:14.266 CXX test/cpp_headers/fuse_dispatcher.o 00:04:14.266 CXX test/cpp_headers/gpt_spec.o 00:04:14.266 LINK bdevio 00:04:14.266 CXX test/cpp_headers/hexlify.o 00:04:14.525 CXX test/cpp_headers/histogram_data.o 00:04:14.525 CXX test/cpp_headers/idxd.o 00:04:14.525 CXX test/cpp_headers/idxd_spec.o 00:04:14.525 CXX test/cpp_headers/init.o 00:04:14.525 CXX test/cpp_headers/ioat.o 00:04:14.525 CXX test/cpp_headers/ioat_spec.o 00:04:14.525 CXX test/cpp_headers/iscsi_spec.o 00:04:14.525 CXX test/cpp_headers/json.o 00:04:14.525 CXX test/cpp_headers/jsonrpc.o 00:04:14.525 CXX test/cpp_headers/keyring.o 00:04:14.525 CXX test/cpp_headers/keyring_module.o 00:04:14.525 CXX test/cpp_headers/likely.o 00:04:14.785 CXX test/cpp_headers/log.o 00:04:14.785 CXX test/cpp_headers/lvol.o 00:04:14.785 CXX test/cpp_headers/md5.o 00:04:14.785 CXX test/cpp_headers/memory.o 00:04:14.785 CXX test/cpp_headers/mmio.o 00:04:14.785 CXX test/cpp_headers/nbd.o 00:04:14.785 CXX test/cpp_headers/net.o 00:04:14.785 CXX test/cpp_headers/notify.o 00:04:14.785 CXX test/cpp_headers/nvme.o 00:04:14.785 
CXX test/cpp_headers/nvme_intel.o 00:04:14.785 CXX test/cpp_headers/nvme_ocssd.o 00:04:14.785 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:15.044 CXX test/cpp_headers/nvme_spec.o 00:04:15.044 CXX test/cpp_headers/nvme_zns.o 00:04:15.044 CXX test/cpp_headers/nvmf_cmd.o 00:04:15.044 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:15.044 CXX test/cpp_headers/nvmf.o 00:04:15.044 CXX test/cpp_headers/nvmf_spec.o 00:04:15.044 CXX test/cpp_headers/nvmf_transport.o 00:04:15.044 CXX test/cpp_headers/opal.o 00:04:15.044 CXX test/cpp_headers/opal_spec.o 00:04:15.044 CXX test/cpp_headers/pci_ids.o 00:04:15.044 CXX test/cpp_headers/pipe.o 00:04:15.044 CXX test/cpp_headers/queue.o 00:04:15.044 CXX test/cpp_headers/reduce.o 00:04:15.044 CXX test/cpp_headers/rpc.o 00:04:15.044 CXX test/cpp_headers/scheduler.o 00:04:15.303 CXX test/cpp_headers/scsi.o 00:04:15.303 CXX test/cpp_headers/scsi_spec.o 00:04:15.303 CXX test/cpp_headers/sock.o 00:04:15.303 CXX test/cpp_headers/stdinc.o 00:04:15.303 CXX test/cpp_headers/string.o 00:04:15.303 CXX test/cpp_headers/thread.o 00:04:15.303 CXX test/cpp_headers/trace.o 00:04:15.303 LINK cuse 00:04:15.303 CXX test/cpp_headers/trace_parser.o 00:04:15.303 CXX test/cpp_headers/tree.o 00:04:15.303 CXX test/cpp_headers/ublk.o 00:04:15.303 CXX test/cpp_headers/util.o 00:04:15.303 CXX test/cpp_headers/uuid.o 00:04:15.303 CXX test/cpp_headers/version.o 00:04:15.303 CXX test/cpp_headers/vfio_user_pci.o 00:04:15.303 CXX test/cpp_headers/vfio_user_spec.o 00:04:15.303 CXX test/cpp_headers/vhost.o 00:04:15.563 CXX test/cpp_headers/vmd.o 00:04:15.563 CXX test/cpp_headers/xor.o 00:04:15.563 CXX test/cpp_headers/zipf.o 00:04:18.141 LINK esnap 00:04:18.709 00:04:18.709 real 1m16.182s 00:04:18.709 user 5m55.819s 00:04:18.709 sys 1m7.360s 00:04:18.709 21:36:41 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:18.709 21:36:41 make -- common/autotest_common.sh@10 -- $ set +x 00:04:18.709 ************************************ 00:04:18.709 END TEST make 
00:04:18.709 ************************************ 00:04:18.709 21:36:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:18.709 21:36:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:18.709 21:36:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:18.709 21:36:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.709 21:36:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:18.709 21:36:41 -- pm/common@44 -- $ pid=6199 00:04:18.709 21:36:41 -- pm/common@50 -- $ kill -TERM 6199 00:04:18.709 21:36:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.709 21:36:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:18.709 21:36:41 -- pm/common@44 -- $ pid=6201 00:04:18.709 21:36:41 -- pm/common@50 -- $ kill -TERM 6201 00:04:18.709 21:36:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:18.709 21:36:41 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:18.969 21:36:41 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:18.969 21:36:41 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:18.969 21:36:41 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:18.969 21:36:41 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:18.969 21:36:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.969 21:36:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.969 21:36:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.969 21:36:41 -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.969 21:36:41 -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.969 21:36:41 -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.969 21:36:41 -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.969 21:36:41 -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.969 
21:36:41 -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.969 21:36:41 -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.969 21:36:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.969 21:36:41 -- scripts/common.sh@344 -- # case "$op" in 00:04:18.969 21:36:41 -- scripts/common.sh@345 -- # : 1 00:04:18.969 21:36:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.969 21:36:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.969 21:36:41 -- scripts/common.sh@365 -- # decimal 1 00:04:18.969 21:36:41 -- scripts/common.sh@353 -- # local d=1 00:04:18.969 21:36:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.969 21:36:41 -- scripts/common.sh@355 -- # echo 1 00:04:18.969 21:36:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.969 21:36:41 -- scripts/common.sh@366 -- # decimal 2 00:04:18.969 21:36:41 -- scripts/common.sh@353 -- # local d=2 00:04:18.969 21:36:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.969 21:36:41 -- scripts/common.sh@355 -- # echo 2 00:04:18.969 21:36:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.969 21:36:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.969 21:36:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.969 21:36:41 -- scripts/common.sh@368 -- # return 0 00:04:18.969 21:36:41 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.969 21:36:41 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.969 --rc genhtml_branch_coverage=1 00:04:18.969 --rc genhtml_function_coverage=1 00:04:18.969 --rc genhtml_legend=1 00:04:18.969 --rc geninfo_all_blocks=1 00:04:18.969 --rc geninfo_unexecuted_blocks=1 00:04:18.969 00:04:18.969 ' 00:04:18.969 21:36:41 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.969 --rc 
genhtml_branch_coverage=1 00:04:18.969 --rc genhtml_function_coverage=1 00:04:18.969 --rc genhtml_legend=1 00:04:18.969 --rc geninfo_all_blocks=1 00:04:18.969 --rc geninfo_unexecuted_blocks=1 00:04:18.969 00:04:18.969 ' 00:04:18.969 21:36:41 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.969 --rc genhtml_branch_coverage=1 00:04:18.969 --rc genhtml_function_coverage=1 00:04:18.969 --rc genhtml_legend=1 00:04:18.969 --rc geninfo_all_blocks=1 00:04:18.969 --rc geninfo_unexecuted_blocks=1 00:04:18.969 00:04:18.970 ' 00:04:18.970 21:36:41 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.970 --rc genhtml_branch_coverage=1 00:04:18.970 --rc genhtml_function_coverage=1 00:04:18.970 --rc genhtml_legend=1 00:04:18.970 --rc geninfo_all_blocks=1 00:04:18.970 --rc geninfo_unexecuted_blocks=1 00:04:18.970 00:04:18.970 ' 00:04:18.970 21:36:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:18.970 21:36:41 -- nvmf/common.sh@7 -- # uname -s 00:04:18.970 21:36:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.970 21:36:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.970 21:36:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.970 21:36:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.970 21:36:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.970 21:36:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.970 21:36:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.970 21:36:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.970 21:36:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.970 21:36:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.970 21:36:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:422ad100-7c9f-4e2b-8d8c-77b3989655bc 
00:04:18.970 21:36:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=422ad100-7c9f-4e2b-8d8c-77b3989655bc 00:04:18.970 21:36:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.970 21:36:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.970 21:36:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.970 21:36:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.970 21:36:41 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:18.970 21:36:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:18.970 21:36:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.970 21:36:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.970 21:36:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.970 21:36:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.970 21:36:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.970 21:36:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.970 21:36:41 -- paths/export.sh@5 -- # export PATH 00:04:18.970 21:36:41 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.970 21:36:41 -- nvmf/common.sh@51 -- # : 0 00:04:18.970 21:36:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:18.970 21:36:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:18.970 21:36:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.970 21:36:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.970 21:36:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.970 21:36:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:18.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:18.970 21:36:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:18.970 21:36:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:18.970 21:36:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:18.970 21:36:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:18.970 21:36:41 -- spdk/autotest.sh@32 -- # uname -s 00:04:18.970 21:36:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:18.970 21:36:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:18.970 21:36:42 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.970 21:36:42 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:18.970 21:36:42 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:18.970 21:36:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:18.970 21:36:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:18.970 21:36:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:18.970 21:36:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor 
--property 00:04:18.970 21:36:42 -- spdk/autotest.sh@48 -- # udevadm_pid=66491 00:04:18.970 21:36:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:18.970 21:36:42 -- pm/common@17 -- # local monitor 00:04:18.970 21:36:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.970 21:36:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:18.970 21:36:42 -- pm/common@21 -- # date +%s 00:04:18.970 21:36:42 -- pm/common@25 -- # sleep 1 00:04:18.970 21:36:42 -- pm/common@21 -- # date +%s 00:04:18.970 21:36:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732743402 00:04:18.970 21:36:42 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732743402 00:04:19.230 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732743402_collect-cpu-load.pm.log 00:04:19.230 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732743402_collect-vmstat.pm.log 00:04:20.165 21:36:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:20.165 21:36:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:20.165 21:36:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.165 21:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.165 21:36:43 -- spdk/autotest.sh@59 -- # create_test_list 00:04:20.165 21:36:43 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:20.165 21:36:43 -- common/autotest_common.sh@10 -- # set +x 00:04:20.165 21:36:43 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:20.165 21:36:43 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:20.165 21:36:43 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:20.165 21:36:43 -- 
spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:20.165 21:36:43 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:20.165 21:36:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:20.165 21:36:43 -- common/autotest_common.sh@1457 -- # uname 00:04:20.165 21:36:43 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:20.165 21:36:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:20.165 21:36:43 -- common/autotest_common.sh@1477 -- # uname 00:04:20.165 21:36:43 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:20.165 21:36:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:20.165 21:36:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:20.165 lcov: LCOV version 1.15 00:04:20.166 21:36:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:35.053 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:35.053 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:49.938 21:37:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:49.938 21:37:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.938 21:37:11 -- common/autotest_common.sh@10 -- # set +x 00:04:49.938 21:37:11 -- spdk/autotest.sh@78 -- # rm -f 00:04:49.938 21:37:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:49.938 0000:00:03.0 (1af4 1001): Active devices: 
mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.938 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:49.938 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:49.938 21:37:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:49.938 21:37:12 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:49.938 21:37:12 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:49.938 21:37:12 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:49.938 21:37:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.938 21:37:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:49.938 21:37:12 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:49.938 21:37:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.938 21:37:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:49.938 21:37:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:49.938 21:37:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.938 21:37:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:04:49.938 21:37:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:49.938 21:37:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:49.938 21:37:12 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 
00:04:49.938 21:37:12 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:49.938 21:37:12 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:49.938 21:37:12 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:49.938 21:37:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:49.938 21:37:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.938 21:37:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.938 21:37:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:49.938 21:37:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:49.938 21:37:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:49.938 No valid GPT data, bailing 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # pt= 00:04:49.938 21:37:12 -- scripts/common.sh@395 -- # return 1 00:04:49.938 21:37:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:49.938 1+0 records in 00:04:49.938 1+0 records out 00:04:49.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068623 s, 153 MB/s 00:04:49.938 21:37:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.938 21:37:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.938 21:37:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:49.938 21:37:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:49.938 21:37:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:49.938 No valid GPT data, bailing 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # pt= 00:04:49.938 21:37:12 -- scripts/common.sh@395 -- # return 1 00:04:49.938 21:37:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:49.938 1+0 records in 
00:04:49.938 1+0 records out 00:04:49.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645453 s, 162 MB/s 00:04:49.938 21:37:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.938 21:37:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.938 21:37:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:49.938 21:37:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:49.938 21:37:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:49.938 No valid GPT data, bailing 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # pt= 00:04:49.938 21:37:12 -- scripts/common.sh@395 -- # return 1 00:04:49.938 21:37:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:49.938 1+0 records in 00:04:49.938 1+0 records out 00:04:49.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432584 s, 242 MB/s 00:04:49.938 21:37:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:49.938 21:37:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:49.938 21:37:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:49.938 21:37:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:49.938 21:37:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:49.938 No valid GPT data, bailing 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:49.938 21:37:12 -- scripts/common.sh@394 -- # pt= 00:04:49.938 21:37:12 -- scripts/common.sh@395 -- # return 1 00:04:49.938 21:37:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:49.938 1+0 records in 00:04:49.938 1+0 records out 00:04:49.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00649334 s, 161 MB/s 00:04:49.938 21:37:12 -- spdk/autotest.sh@105 -- # sync 00:04:49.938 21:37:12 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:04:49.938 21:37:12 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:49.938 21:37:12 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:53.230 21:37:15 -- spdk/autotest.sh@111 -- # uname -s 00:04:53.230 21:37:15 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:53.230 21:37:15 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:53.230 21:37:15 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:53.806 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:53.806 Hugepages 00:04:53.806 node hugesize free / total 00:04:53.806 node0 1048576kB 0 / 0 00:04:53.806 node0 2048kB 0 / 0 00:04:53.806 00:04:53.806 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:53.806 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:53.806 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:54.067 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:54.067 21:37:16 -- spdk/autotest.sh@117 -- # uname -s 00:04:54.067 21:37:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:54.067 21:37:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:54.067 21:37:16 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:55.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:55.005 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.005 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:55.005 21:37:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:55.943 21:37:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:55.943 21:37:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:55.943 21:37:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:55.943 21:37:19 -- common/autotest_common.sh@1520 -- # 
get_nvme_bdfs 00:04:55.943 21:37:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:55.943 21:37:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:55.943 21:37:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:55.943 21:37:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:55.943 21:37:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:56.202 21:37:19 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:56.202 21:37:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:56.202 21:37:19 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:56.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:56.462 Waiting for block devices as requested 00:04:56.720 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.720 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.720 21:37:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:56.720 21:37:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:56.720 21:37:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.720 21:37:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:56.720 21:37:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.720 21:37:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:56.720 21:37:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.720 21:37:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:56.720 21:37:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 
00:04:56.720 21:37:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:56.720 21:37:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:56.720 21:37:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:56.720 21:37:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:56.977 21:37:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:56.977 21:37:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:56.977 21:37:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:56.977 21:37:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:56.977 21:37:19 -- common/autotest_common.sh@1543 -- # continue 00:04:56.977 21:37:19 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:56.977 21:37:19 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:56.977 21:37:19 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:56.977 21:37:19 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:56.977 21:37:19 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:56.977 
21:37:19 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:56.977 21:37:19 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:56.977 21:37:19 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:56.977 21:37:19 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:56.977 21:37:19 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:56.977 21:37:19 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:56.977 21:37:19 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:56.977 21:37:19 -- common/autotest_common.sh@1543 -- # continue 00:04:56.977 21:37:19 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.977 21:37:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:56.977 21:37:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.977 21:37:19 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.977 21:37:19 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.977 21:37:19 -- common/autotest_common.sh@10 -- # set +x 00:04:56.977 21:37:19 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.912 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:57.912 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.912 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:57.912 21:37:20 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:57.912 21:37:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.912 21:37:20 -- common/autotest_common.sh@10 -- # set +x 00:04:57.912 21:37:20 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:57.912 21:37:20 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:57.912 21:37:20 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.912 21:37:20 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:57.912 21:37:20 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:57.912 21:37:20 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:57.912 21:37:20 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:57.912 21:37:20 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:57.912 21:37:20 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.912 21:37:20 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.912 21:37:20 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.912 21:37:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:57.912 21:37:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.171 21:37:21 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:58.171 21:37:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:58.171 21:37:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.171 21:37:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.171 21:37:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.171 21:37:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.171 21:37:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:58.171 21:37:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.171 21:37:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:58.171 21:37:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.171 21:37:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:58.171 21:37:21 -- 
common/autotest_common.sh@1572 -- # return 0 00:04:58.171 21:37:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:58.171 21:37:21 -- common/autotest_common.sh@1580 -- # return 0 00:04:58.171 21:37:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.171 21:37:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.171 21:37:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.171 21:37:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.171 21:37:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.171 21:37:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:58.171 21:37:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.171 21:37:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.171 21:37:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.171 21:37:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.171 21:37:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.171 21:37:21 -- common/autotest_common.sh@10 -- # set +x 00:04:58.171 ************************************ 00:04:58.171 START TEST env 00:04:58.171 ************************************ 00:04:58.171 21:37:21 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.171 * Looking for test storage... 
00:04:58.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.171 21:37:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:58.171 21:37:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:58.171 21:37:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:58.430 21:37:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.430 21:37:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.430 21:37:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.430 21:37:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.430 21:37:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.430 21:37:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.430 21:37:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.430 21:37:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.430 21:37:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.430 21:37:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.430 21:37:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.430 21:37:21 env -- scripts/common.sh@344 -- # case "$op" in 00:04:58.430 21:37:21 env -- scripts/common.sh@345 -- # : 1 00:04:58.430 21:37:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.430 21:37:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.430 21:37:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:58.430 21:37:21 env -- scripts/common.sh@353 -- # local d=1 00:04:58.430 21:37:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.430 21:37:21 env -- scripts/common.sh@355 -- # echo 1 00:04:58.430 21:37:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.430 21:37:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:58.430 21:37:21 env -- scripts/common.sh@353 -- # local d=2 00:04:58.430 21:37:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.430 21:37:21 env -- scripts/common.sh@355 -- # echo 2 00:04:58.430 21:37:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.430 21:37:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.430 21:37:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.430 21:37:21 env -- scripts/common.sh@368 -- # return 0 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:58.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.430 --rc genhtml_branch_coverage=1 00:04:58.430 --rc genhtml_function_coverage=1 00:04:58.430 --rc genhtml_legend=1 00:04:58.430 --rc geninfo_all_blocks=1 00:04:58.430 --rc geninfo_unexecuted_blocks=1 00:04:58.430 00:04:58.430 ' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:58.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.430 --rc genhtml_branch_coverage=1 00:04:58.430 --rc genhtml_function_coverage=1 00:04:58.430 --rc genhtml_legend=1 00:04:58.430 --rc geninfo_all_blocks=1 00:04:58.430 --rc geninfo_unexecuted_blocks=1 00:04:58.430 00:04:58.430 ' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:58.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:58.430 --rc genhtml_branch_coverage=1 00:04:58.430 --rc genhtml_function_coverage=1 00:04:58.430 --rc genhtml_legend=1 00:04:58.430 --rc geninfo_all_blocks=1 00:04:58.430 --rc geninfo_unexecuted_blocks=1 00:04:58.430 00:04:58.430 ' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:58.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.430 --rc genhtml_branch_coverage=1 00:04:58.430 --rc genhtml_function_coverage=1 00:04:58.430 --rc genhtml_legend=1 00:04:58.430 --rc geninfo_all_blocks=1 00:04:58.430 --rc geninfo_unexecuted_blocks=1 00:04:58.430 00:04:58.430 ' 00:04:58.430 21:37:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.430 21:37:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.430 21:37:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.430 ************************************ 00:04:58.430 START TEST env_memory 00:04:58.430 ************************************ 00:04:58.430 21:37:21 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.430 00:04:58.430 00:04:58.430 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.431 http://cunit.sourceforge.net/ 00:04:58.431 00:04:58.431 00:04:58.431 Suite: memory 00:04:58.431 Test: alloc and free memory map ...[2024-11-27 21:37:21.438682] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.431 passed 00:04:58.431 Test: mem map translation ...[2024-11-27 21:37:21.484558] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.431 [2024-11-27 21:37:21.484642] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.431 [2024-11-27 21:37:21.484742] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.431 [2024-11-27 21:37:21.484778] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.431 passed 00:04:58.689 Test: mem map registration ...[2024-11-27 21:37:21.554637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:58.689 [2024-11-27 21:37:21.554728] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:58.689 passed 00:04:58.689 Test: mem map adjacent registrations ...passed 00:04:58.689 00:04:58.689 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.689 suites 1 1 n/a 0 0 00:04:58.689 tests 4 4 4 0 0 00:04:58.689 asserts 152 152 152 0 n/a 00:04:58.689 00:04:58.689 Elapsed time = 0.252 seconds 00:04:58.689 00:04:58.689 real 0m0.306s 00:04:58.689 user 0m0.264s 00:04:58.689 sys 0m0.030s 00:04:58.689 21:37:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.689 21:37:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.689 ************************************ 00:04:58.689 END TEST env_memory 00:04:58.689 ************************************ 00:04:58.689 21:37:21 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.689 21:37:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.689 21:37:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.689 21:37:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.689 
************************************ 00:04:58.689 START TEST env_vtophys 00:04:58.689 ************************************ 00:04:58.689 21:37:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.689 EAL: lib.eal log level changed from notice to debug 00:04:58.689 EAL: Detected lcore 0 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 1 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 2 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 3 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 4 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 5 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 6 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 7 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 8 as core 0 on socket 0 00:04:58.689 EAL: Detected lcore 9 as core 0 on socket 0 00:04:58.689 EAL: Maximum logical cores by configuration: 128 00:04:58.689 EAL: Detected CPU lcores: 10 00:04:58.689 EAL: Detected NUMA nodes: 1 00:04:58.689 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:04:58.689 EAL: Detected shared linkage of DPDK 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:04:58.689 EAL: Registered [vdev] bus. 
00:04:58.689 EAL: bus.vdev log level changed from disabled to notice 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:04:58.689 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:58.689 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:04:58.689 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:04:58.689 EAL: No shared files mode enabled, IPC will be disabled 00:04:58.689 EAL: No shared files mode enabled, IPC is disabled 00:04:58.689 EAL: Selected IOVA mode 'PA' 00:04:58.689 EAL: Probing VFIO support... 00:04:58.689 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.689 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:58.689 EAL: Ask a virtual area of 0x2e000 bytes 00:04:58.689 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:58.689 EAL: Setting up physically contiguous memory... 
00:04:58.689 EAL: Setting maximum number of open files to 524288 00:04:58.689 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:58.689 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:58.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.689 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:58.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.689 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:58.689 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:58.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.689 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:58.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.689 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:58.689 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:58.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.689 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:58.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.689 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:58.689 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:58.689 EAL: Ask a virtual area of 0x61000 bytes 00:04:58.689 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:58.689 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:58.689 EAL: Ask a virtual area of 0x400000000 bytes 00:04:58.689 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:58.689 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:58.689 EAL: Hugepages will be freed exactly as allocated. 
00:04:58.689 EAL: No shared files mode enabled, IPC is disabled 00:04:58.689 EAL: No shared files mode enabled, IPC is disabled 00:04:58.949 EAL: TSC frequency is ~2290000 KHz 00:04:58.949 EAL: Main lcore 0 is ready (tid=7fb8464dca40;cpuset=[0]) 00:04:58.949 EAL: Trying to obtain current memory policy. 00:04:58.949 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:58.949 EAL: Restoring previous memory policy: 0 00:04:58.949 EAL: request: mp_malloc_sync 00:04:58.949 EAL: No shared files mode enabled, IPC is disabled 00:04:58.949 EAL: Heap on socket 0 was expanded by 2MB 00:04:58.949 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:58.949 EAL: No shared files mode enabled, IPC is disabled 00:04:58.949 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:58.949 EAL: Mem event callback 'spdk:(nil)' registered 00:04:58.949 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:58.949 00:04:58.949 00:04:58.949 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.949 http://cunit.sourceforge.net/ 00:04:58.949 00:04:58.949 00:04:58.949 Suite: components_suite 00:04:59.208 Test: vtophys_malloc_test ...passed 00:04:59.208 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.208 EAL: Restoring previous memory policy: 4 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.208 EAL: Trying to obtain current memory policy. 
00:04:59.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.208 EAL: Restoring previous memory policy: 4 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.208 EAL: Trying to obtain current memory policy. 00:04:59.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.208 EAL: Restoring previous memory policy: 4 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.208 EAL: Trying to obtain current memory policy. 00:04:59.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.208 EAL: Restoring previous memory policy: 4 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.208 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.208 EAL: request: mp_malloc_sync 00:04:59.208 EAL: No shared files mode enabled, IPC is disabled 00:04:59.208 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.208 EAL: Trying to obtain current memory policy. 
00:04:59.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.468 EAL: Restoring previous memory policy: 4 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.468 EAL: Trying to obtain current memory policy. 00:04:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.468 EAL: Restoring previous memory policy: 4 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.468 EAL: Trying to obtain current memory policy. 00:04:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.468 EAL: Restoring previous memory policy: 4 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was expanded by 130MB 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was shrunk by 130MB 00:04:59.468 EAL: Trying to obtain current memory policy. 
00:04:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.468 EAL: Restoring previous memory policy: 4 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was expanded by 258MB 00:04:59.468 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.468 EAL: request: mp_malloc_sync 00:04:59.468 EAL: No shared files mode enabled, IPC is disabled 00:04:59.468 EAL: Heap on socket 0 was shrunk by 258MB 00:04:59.468 EAL: Trying to obtain current memory policy. 00:04:59.468 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.726 EAL: Restoring previous memory policy: 4 00:04:59.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.726 EAL: request: mp_malloc_sync 00:04:59.726 EAL: No shared files mode enabled, IPC is disabled 00:04:59.726 EAL: Heap on socket 0 was expanded by 514MB 00:04:59.726 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.985 EAL: request: mp_malloc_sync 00:04:59.985 EAL: No shared files mode enabled, IPC is disabled 00:04:59.985 EAL: Heap on socket 0 was shrunk by 514MB 00:04:59.985 EAL: Trying to obtain current memory policy. 
00:04:59.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.985 EAL: Restoring previous memory policy: 4 00:04:59.985 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.985 EAL: request: mp_malloc_sync 00:04:59.985 EAL: No shared files mode enabled, IPC is disabled 00:04:59.985 EAL: Heap on socket 0 was expanded by 1026MB 00:05:00.261 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.545 passed 00:05:00.545 00:05:00.545 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.545 suites 1 1 n/a 0 0 00:05:00.545 tests 2 2 2 0 0 00:05:00.545 asserts 5624 5624 5624 0 n/a 00:05:00.545 00:05:00.545 Elapsed time = 1.421 seconds 00:05:00.545 EAL: request: mp_malloc_sync 00:05:00.545 EAL: No shared files mode enabled, IPC is disabled 00:05:00.545 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:00.545 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.545 EAL: request: mp_malloc_sync 00:05:00.545 EAL: No shared files mode enabled, IPC is disabled 00:05:00.545 EAL: Heap on socket 0 was shrunk by 2MB 00:05:00.545 EAL: No shared files mode enabled, IPC is disabled 00:05:00.545 EAL: No shared files mode enabled, IPC is disabled 00:05:00.545 EAL: No shared files mode enabled, IPC is disabled 00:05:00.545 00:05:00.545 real 0m1.681s 00:05:00.545 user 0m0.803s 00:05:00.545 sys 0m0.739s 00:05:00.545 21:37:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.545 21:37:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:00.545 ************************************ 00:05:00.545 END TEST env_vtophys 00:05:00.545 ************************************ 00:05:00.545 21:37:23 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.545 21:37:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.545 21:37:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.545 21:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.545 
************************************ 00:05:00.545 START TEST env_pci 00:05:00.545 ************************************ 00:05:00.545 21:37:23 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:00.545 00:05:00.545 00:05:00.545 CUnit - A unit testing framework for C - Version 2.1-3 00:05:00.545 http://cunit.sourceforge.net/ 00:05:00.545 00:05:00.545 00:05:00.545 Suite: pci 00:05:00.545 Test: pci_hook ...[2024-11-27 21:37:23.517672] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68757 has claimed it 00:05:00.545 passed 00:05:00.545 00:05:00.545 Run Summary: Type Total Ran Passed Failed Inactive 00:05:00.545 suites 1 1 n/a 0 0 00:05:00.545 tests 1 1 1 0 0 00:05:00.545 asserts 25 25 25 0 n/a 00:05:00.545 00:05:00.545 Elapsed time = 0.006 secondsEAL: Cannot find device (10000:00:01.0) 00:05:00.545 EAL: Failed to attach device on primary process 00:05:00.545 00:05:00.545 00:05:00.545 real 0m0.095s 00:05:00.545 user 0m0.047s 00:05:00.545 sys 0m0.047s 00:05:00.545 21:37:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.545 21:37:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:00.545 ************************************ 00:05:00.545 END TEST env_pci 00:05:00.545 ************************************ 00:05:00.545 21:37:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:00.545 21:37:23 env -- env/env.sh@15 -- # uname 00:05:00.545 21:37:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:00.545 21:37:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:00.545 21:37:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.545 21:37:23 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:00.545 21:37:23 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.545 21:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:00.545 ************************************ 00:05:00.545 START TEST env_dpdk_post_init 00:05:00.545 ************************************ 00:05:00.545 21:37:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:00.804 EAL: Detected CPU lcores: 10 00:05:00.804 EAL: Detected NUMA nodes: 1 00:05:00.804 EAL: Detected shared linkage of DPDK 00:05:00.804 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:00.804 EAL: Selected IOVA mode 'PA' 00:05:00.804 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:00.804 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:00.804 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:00.804 Starting DPDK initialization... 00:05:00.804 Starting SPDK post initialization... 00:05:00.804 SPDK NVMe probe 00:05:00.804 Attaching to 0000:00:10.0 00:05:00.804 Attaching to 0000:00:11.0 00:05:00.804 Attached to 0000:00:10.0 00:05:00.804 Attached to 0000:00:11.0 00:05:00.804 Cleaning up... 
00:05:00.804 00:05:00.804 real 0m0.246s 00:05:00.804 user 0m0.078s 00:05:00.804 sys 0m0.069s 00:05:00.804 21:37:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.804 21:37:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.804 ************************************ 00:05:00.804 END TEST env_dpdk_post_init 00:05:00.804 ************************************ 00:05:01.063 21:37:23 env -- env/env.sh@26 -- # uname 00:05:01.063 21:37:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:01.063 21:37:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.063 21:37:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.063 21:37:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.063 21:37:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.063 ************************************ 00:05:01.063 START TEST env_mem_callbacks 00:05:01.063 ************************************ 00:05:01.063 21:37:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:01.063 EAL: Detected CPU lcores: 10 00:05:01.063 EAL: Detected NUMA nodes: 1 00:05:01.063 EAL: Detected shared linkage of DPDK 00:05:01.063 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:01.063 EAL: Selected IOVA mode 'PA' 00:05:01.063 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:01.063 00:05:01.063 00:05:01.063 CUnit - A unit testing framework for C - Version 2.1-3 00:05:01.063 http://cunit.sourceforge.net/ 00:05:01.063 00:05:01.063 00:05:01.063 Suite: memory 00:05:01.063 Test: test ... 
00:05:01.063 register 0x200000200000 2097152 00:05:01.063 malloc 3145728 00:05:01.063 register 0x200000400000 4194304 00:05:01.063 buf 0x200000500000 len 3145728 PASSED 00:05:01.063 malloc 64 00:05:01.063 buf 0x2000004fff40 len 64 PASSED 00:05:01.063 malloc 4194304 00:05:01.063 register 0x200000800000 6291456 00:05:01.063 buf 0x200000a00000 len 4194304 PASSED 00:05:01.063 free 0x200000500000 3145728 00:05:01.063 free 0x2000004fff40 64 00:05:01.063 unregister 0x200000400000 4194304 PASSED 00:05:01.063 free 0x200000a00000 4194304 00:05:01.063 unregister 0x200000800000 6291456 PASSED 00:05:01.063 malloc 8388608 00:05:01.063 register 0x200000400000 10485760 00:05:01.063 buf 0x200000600000 len 8388608 PASSED 00:05:01.063 free 0x200000600000 8388608 00:05:01.063 unregister 0x200000400000 10485760 PASSED 00:05:01.063 passed 00:05:01.063 00:05:01.063 Run Summary: Type Total Ran Passed Failed Inactive 00:05:01.063 suites 1 1 n/a 0 0 00:05:01.063 tests 1 1 1 0 0 00:05:01.063 asserts 15 15 15 0 n/a 00:05:01.063 00:05:01.063 Elapsed time = 0.011 seconds 00:05:01.063 00:05:01.063 real 0m0.181s 00:05:01.063 user 0m0.033s 00:05:01.063 sys 0m0.048s 00:05:01.063 21:37:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.063 21:37:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:01.063 ************************************ 00:05:01.063 END TEST env_mem_callbacks 00:05:01.063 ************************************ 00:05:01.322 00:05:01.322 real 0m3.079s 00:05:01.322 user 0m1.456s 00:05:01.322 sys 0m1.296s 00:05:01.322 21:37:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.322 21:37:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:01.322 ************************************ 00:05:01.322 END TEST env 00:05:01.322 ************************************ 00:05:01.322 21:37:24 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.322 21:37:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.322 21:37:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.322 21:37:24 -- common/autotest_common.sh@10 -- # set +x 00:05:01.322 ************************************ 00:05:01.322 START TEST rpc 00:05:01.322 ************************************ 00:05:01.322 21:37:24 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:01.322 * Looking for test storage... 00:05:01.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:01.322 21:37:24 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:01.322 21:37:24 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:01.322 21:37:24 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:01.581 21:37:24 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:01.581 21:37:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:01.581 21:37:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:01.581 21:37:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:01.581 21:37:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:01.581 21:37:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:01.581 21:37:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:01.581 21:37:24 rpc -- scripts/common.sh@345 -- # : 1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:01.581 21:37:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:01.581 21:37:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@353 -- # local d=1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:01.581 21:37:24 rpc -- scripts/common.sh@355 -- # echo 1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:01.581 21:37:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@353 -- # local d=2 00:05:01.581 21:37:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:01.582 21:37:24 rpc -- scripts/common.sh@355 -- # echo 2 00:05:01.582 21:37:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:01.582 21:37:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:01.582 21:37:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:01.582 21:37:24 rpc -- scripts/common.sh@368 -- # return 0 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.582 --rc genhtml_branch_coverage=1 00:05:01.582 --rc genhtml_function_coverage=1 00:05:01.582 --rc genhtml_legend=1 00:05:01.582 --rc geninfo_all_blocks=1 00:05:01.582 --rc geninfo_unexecuted_blocks=1 00:05:01.582 00:05:01.582 ' 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.582 --rc genhtml_branch_coverage=1 00:05:01.582 --rc genhtml_function_coverage=1 00:05:01.582 --rc genhtml_legend=1 00:05:01.582 --rc geninfo_all_blocks=1 00:05:01.582 --rc geninfo_unexecuted_blocks=1 00:05:01.582 00:05:01.582 ' 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:01.582 --rc genhtml_branch_coverage=1 00:05:01.582 --rc genhtml_function_coverage=1 00:05:01.582 --rc genhtml_legend=1 00:05:01.582 --rc geninfo_all_blocks=1 00:05:01.582 --rc geninfo_unexecuted_blocks=1 00:05:01.582 00:05:01.582 ' 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:01.582 --rc genhtml_branch_coverage=1 00:05:01.582 --rc genhtml_function_coverage=1 00:05:01.582 --rc genhtml_legend=1 00:05:01.582 --rc geninfo_all_blocks=1 00:05:01.582 --rc geninfo_unexecuted_blocks=1 00:05:01.582 00:05:01.582 ' 00:05:01.582 21:37:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68884 00:05:01.582 21:37:24 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:01.582 21:37:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.582 21:37:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68884 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 68884 ']' 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.582 21:37:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.582 [2024-11-27 21:37:24.599773] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:01.582 [2024-11-27 21:37:24.599930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68884 ] 00:05:01.841 [2024-11-27 21:37:24.737393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.841 [2024-11-27 21:37:24.763711] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:01.841 [2024-11-27 21:37:24.763786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68884' to capture a snapshot of events at runtime. 00:05:01.841 [2024-11-27 21:37:24.763807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:01.841 [2024-11-27 21:37:24.763816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:01.841 [2024-11-27 21:37:24.763834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68884 for offline analysis/debug. 
00:05:01.841 [2024-11-27 21:37:24.764233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.409 21:37:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.409 21:37:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:02.409 21:37:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.409 21:37:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.409 21:37:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:02.409 21:37:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:02.409 21:37:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.409 21:37:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.409 21:37:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 ************************************ 00:05:02.409 START TEST rpc_integrity 00:05:02.409 ************************************ 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:02.409 21:37:25 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:02.409 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.409 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.668 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.668 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:02.668 { 00:05:02.668 "name": "Malloc0", 00:05:02.668 "aliases": [ 00:05:02.668 "7f3f81d3-993d-438e-ae4e-3ee8a237d6f5" 00:05:02.668 ], 00:05:02.668 "product_name": "Malloc disk", 00:05:02.668 "block_size": 512, 00:05:02.668 "num_blocks": 16384, 00:05:02.668 "uuid": "7f3f81d3-993d-438e-ae4e-3ee8a237d6f5", 00:05:02.668 "assigned_rate_limits": { 00:05:02.668 "rw_ios_per_sec": 0, 00:05:02.668 "rw_mbytes_per_sec": 0, 00:05:02.668 "r_mbytes_per_sec": 0, 00:05:02.668 "w_mbytes_per_sec": 0 00:05:02.668 }, 00:05:02.668 "claimed": false, 00:05:02.668 "zoned": false, 00:05:02.668 "supported_io_types": { 00:05:02.668 "read": true, 00:05:02.668 "write": true, 00:05:02.668 "unmap": true, 00:05:02.668 "flush": true, 00:05:02.668 "reset": true, 00:05:02.668 "nvme_admin": false, 00:05:02.668 "nvme_io": false, 00:05:02.668 "nvme_io_md": false, 00:05:02.668 "write_zeroes": true, 00:05:02.668 "zcopy": true, 00:05:02.668 "get_zone_info": false, 00:05:02.668 "zone_management": false, 00:05:02.668 "zone_append": false, 00:05:02.668 "compare": false, 00:05:02.668 "compare_and_write": false, 00:05:02.668 "abort": true, 00:05:02.668 "seek_hole": false, 
00:05:02.668 "seek_data": false, 00:05:02.668 "copy": true, 00:05:02.668 "nvme_iov_md": false 00:05:02.668 }, 00:05:02.668 "memory_domains": [ 00:05:02.668 { 00:05:02.668 "dma_device_id": "system", 00:05:02.668 "dma_device_type": 1 00:05:02.668 }, 00:05:02.668 { 00:05:02.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.668 "dma_device_type": 2 00:05:02.668 } 00:05:02.668 ], 00:05:02.668 "driver_specific": {} 00:05:02.668 } 00:05:02.668 ]' 00:05:02.668 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:02.668 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:02.668 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 [2024-11-27 21:37:25.596107] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:02.669 [2024-11-27 21:37:25.596206] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:02.669 [2024-11-27 21:37:25.596248] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006c80 00:05:02.669 [2024-11-27 21:37:25.596259] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:02.669 [2024-11-27 21:37:25.598831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:02.669 [2024-11-27 21:37:25.598885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:02.669 Passthru0 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:02.669 { 00:05:02.669 "name": "Malloc0", 00:05:02.669 "aliases": [ 00:05:02.669 "7f3f81d3-993d-438e-ae4e-3ee8a237d6f5" 00:05:02.669 ], 00:05:02.669 "product_name": "Malloc disk", 00:05:02.669 "block_size": 512, 00:05:02.669 "num_blocks": 16384, 00:05:02.669 "uuid": "7f3f81d3-993d-438e-ae4e-3ee8a237d6f5", 00:05:02.669 "assigned_rate_limits": { 00:05:02.669 "rw_ios_per_sec": 0, 00:05:02.669 "rw_mbytes_per_sec": 0, 00:05:02.669 "r_mbytes_per_sec": 0, 00:05:02.669 "w_mbytes_per_sec": 0 00:05:02.669 }, 00:05:02.669 "claimed": true, 00:05:02.669 "claim_type": "exclusive_write", 00:05:02.669 "zoned": false, 00:05:02.669 "supported_io_types": { 00:05:02.669 "read": true, 00:05:02.669 "write": true, 00:05:02.669 "unmap": true, 00:05:02.669 "flush": true, 00:05:02.669 "reset": true, 00:05:02.669 "nvme_admin": false, 00:05:02.669 "nvme_io": false, 00:05:02.669 "nvme_io_md": false, 00:05:02.669 "write_zeroes": true, 00:05:02.669 "zcopy": true, 00:05:02.669 "get_zone_info": false, 00:05:02.669 "zone_management": false, 00:05:02.669 "zone_append": false, 00:05:02.669 "compare": false, 00:05:02.669 "compare_and_write": false, 00:05:02.669 "abort": true, 00:05:02.669 "seek_hole": false, 00:05:02.669 "seek_data": false, 00:05:02.669 "copy": true, 00:05:02.669 "nvme_iov_md": false 00:05:02.669 }, 00:05:02.669 "memory_domains": [ 00:05:02.669 { 00:05:02.669 "dma_device_id": "system", 00:05:02.669 "dma_device_type": 1 00:05:02.669 }, 00:05:02.669 { 00:05:02.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.669 "dma_device_type": 2 00:05:02.669 } 00:05:02.669 ], 00:05:02.669 "driver_specific": {} 00:05:02.669 }, 00:05:02.669 { 00:05:02.669 "name": "Passthru0", 00:05:02.669 "aliases": [ 00:05:02.669 "b06e62c8-2373-539a-a345-25fdc03416d7" 00:05:02.669 ], 00:05:02.669 "product_name": "passthru", 00:05:02.669 
"block_size": 512, 00:05:02.669 "num_blocks": 16384, 00:05:02.669 "uuid": "b06e62c8-2373-539a-a345-25fdc03416d7", 00:05:02.669 "assigned_rate_limits": { 00:05:02.669 "rw_ios_per_sec": 0, 00:05:02.669 "rw_mbytes_per_sec": 0, 00:05:02.669 "r_mbytes_per_sec": 0, 00:05:02.669 "w_mbytes_per_sec": 0 00:05:02.669 }, 00:05:02.669 "claimed": false, 00:05:02.669 "zoned": false, 00:05:02.669 "supported_io_types": { 00:05:02.669 "read": true, 00:05:02.669 "write": true, 00:05:02.669 "unmap": true, 00:05:02.669 "flush": true, 00:05:02.669 "reset": true, 00:05:02.669 "nvme_admin": false, 00:05:02.669 "nvme_io": false, 00:05:02.669 "nvme_io_md": false, 00:05:02.669 "write_zeroes": true, 00:05:02.669 "zcopy": true, 00:05:02.669 "get_zone_info": false, 00:05:02.669 "zone_management": false, 00:05:02.669 "zone_append": false, 00:05:02.669 "compare": false, 00:05:02.669 "compare_and_write": false, 00:05:02.669 "abort": true, 00:05:02.669 "seek_hole": false, 00:05:02.669 "seek_data": false, 00:05:02.669 "copy": true, 00:05:02.669 "nvme_iov_md": false 00:05:02.669 }, 00:05:02.669 "memory_domains": [ 00:05:02.669 { 00:05:02.669 "dma_device_id": "system", 00:05:02.669 "dma_device_type": 1 00:05:02.669 }, 00:05:02.669 { 00:05:02.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.669 "dma_device_type": 2 00:05:02.669 } 00:05:02.669 ], 00:05:02.669 "driver_specific": { 00:05:02.669 "passthru": { 00:05:02.669 "name": "Passthru0", 00:05:02.669 "base_bdev_name": "Malloc0" 00:05:02.669 } 00:05:02.669 } 00:05:02.669 } 00:05:02.669 ]' 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 21:37:25 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:02.669 21:37:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:02.669 00:05:02.669 real 0m0.307s 00:05:02.669 user 0m0.179s 00:05:02.669 sys 0m0.051s 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.669 21:37:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 ************************************ 00:05:02.669 END TEST rpc_integrity 00:05:02.669 ************************************ 00:05:02.929 21:37:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:02.929 21:37:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.929 21:37:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.929 21:37:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 ************************************ 00:05:02.929 START TEST rpc_plugins 00:05:02.929 ************************************ 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:02.929 { 00:05:02.929 "name": "Malloc1", 00:05:02.929 "aliases": [ 00:05:02.929 "d2386be8-6d9b-478d-a91e-85e8f3854be7" 00:05:02.929 ], 00:05:02.929 "product_name": "Malloc disk", 00:05:02.929 "block_size": 4096, 00:05:02.929 "num_blocks": 256, 00:05:02.929 "uuid": "d2386be8-6d9b-478d-a91e-85e8f3854be7", 00:05:02.929 "assigned_rate_limits": { 00:05:02.929 "rw_ios_per_sec": 0, 00:05:02.929 "rw_mbytes_per_sec": 0, 00:05:02.929 "r_mbytes_per_sec": 0, 00:05:02.929 "w_mbytes_per_sec": 0 00:05:02.929 }, 00:05:02.929 "claimed": false, 00:05:02.929 "zoned": false, 00:05:02.929 "supported_io_types": { 00:05:02.929 "read": true, 00:05:02.929 "write": true, 00:05:02.929 "unmap": true, 00:05:02.929 "flush": true, 00:05:02.929 "reset": true, 00:05:02.929 "nvme_admin": false, 00:05:02.929 "nvme_io": false, 00:05:02.929 "nvme_io_md": false, 00:05:02.929 "write_zeroes": true, 00:05:02.929 "zcopy": true, 00:05:02.929 "get_zone_info": false, 00:05:02.929 "zone_management": false, 00:05:02.929 "zone_append": false, 00:05:02.929 "compare": false, 00:05:02.929 "compare_and_write": false, 00:05:02.929 "abort": true, 00:05:02.929 "seek_hole": false, 00:05:02.929 "seek_data": false, 00:05:02.929 "copy": 
true, 00:05:02.929 "nvme_iov_md": false 00:05:02.929 }, 00:05:02.929 "memory_domains": [ 00:05:02.929 { 00:05:02.929 "dma_device_id": "system", 00:05:02.929 "dma_device_type": 1 00:05:02.929 }, 00:05:02.929 { 00:05:02.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:02.929 "dma_device_type": 2 00:05:02.929 } 00:05:02.929 ], 00:05:02.929 "driver_specific": {} 00:05:02.929 } 00:05:02.929 ]' 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:02.929 21:37:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:02.929 00:05:02.929 real 0m0.171s 00:05:02.929 user 0m0.099s 00:05:02.929 sys 0m0.026s 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.929 21:37:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 ************************************ 00:05:02.929 END TEST rpc_plugins 00:05:02.929 ************************************ 00:05:02.929 21:37:26 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:02.929 21:37:26 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.929 21:37:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.929 21:37:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.929 ************************************ 00:05:02.929 START TEST rpc_trace_cmd_test 00:05:02.929 ************************************ 00:05:02.929 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:02.929 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:02.929 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:02.929 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.929 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.188 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.188 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:03.188 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68884", 00:05:03.188 "tpoint_group_mask": "0x8", 00:05:03.188 "iscsi_conn": { 00:05:03.189 "mask": "0x2", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "scsi": { 00:05:03.189 "mask": "0x4", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "bdev": { 00:05:03.189 "mask": "0x8", 00:05:03.189 "tpoint_mask": "0xffffffffffffffff" 00:05:03.189 }, 00:05:03.189 "nvmf_rdma": { 00:05:03.189 "mask": "0x10", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "nvmf_tcp": { 00:05:03.189 "mask": "0x20", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "ftl": { 00:05:03.189 "mask": "0x40", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "blobfs": { 00:05:03.189 "mask": "0x80", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "dsa": { 00:05:03.189 "mask": "0x200", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "thread": { 00:05:03.189 "mask": "0x400", 00:05:03.189 
"tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "nvme_pcie": { 00:05:03.189 "mask": "0x800", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "iaa": { 00:05:03.189 "mask": "0x1000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "nvme_tcp": { 00:05:03.189 "mask": "0x2000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "bdev_nvme": { 00:05:03.189 "mask": "0x4000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "sock": { 00:05:03.189 "mask": "0x8000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "blob": { 00:05:03.189 "mask": "0x10000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "bdev_raid": { 00:05:03.189 "mask": "0x20000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 }, 00:05:03.189 "scheduler": { 00:05:03.189 "mask": "0x40000", 00:05:03.189 "tpoint_mask": "0x0" 00:05:03.189 } 00:05:03.189 }' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:03.189 00:05:03.189 real 0m0.260s 00:05:03.189 user 0m0.211s 00:05:03.189 sys 0m0.041s 00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:03.189 21:37:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:03.189 ************************************ 00:05:03.189 END TEST rpc_trace_cmd_test 00:05:03.189 ************************************ 00:05:03.448 21:37:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:03.448 21:37:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:03.448 21:37:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:03.448 21:37:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.448 21:37:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.448 21:37:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 ************************************ 00:05:03.448 START TEST rpc_daemon_integrity 00:05:03.448 ************************************ 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.448 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:03.448 { 00:05:03.449 "name": "Malloc2", 00:05:03.449 "aliases": [ 00:05:03.449 "197a6a0b-ebd7-4f38-bf9d-c325646cffd6" 00:05:03.449 ], 00:05:03.449 "product_name": "Malloc disk", 00:05:03.449 "block_size": 512, 00:05:03.449 "num_blocks": 16384, 00:05:03.449 "uuid": "197a6a0b-ebd7-4f38-bf9d-c325646cffd6", 00:05:03.449 "assigned_rate_limits": { 00:05:03.449 "rw_ios_per_sec": 0, 00:05:03.449 "rw_mbytes_per_sec": 0, 00:05:03.449 "r_mbytes_per_sec": 0, 00:05:03.449 "w_mbytes_per_sec": 0 00:05:03.449 }, 00:05:03.449 "claimed": false, 00:05:03.449 "zoned": false, 00:05:03.449 "supported_io_types": { 00:05:03.449 "read": true, 00:05:03.449 "write": true, 00:05:03.449 "unmap": true, 00:05:03.449 "flush": true, 00:05:03.449 "reset": true, 00:05:03.449 "nvme_admin": false, 00:05:03.449 "nvme_io": false, 00:05:03.449 "nvme_io_md": false, 00:05:03.449 "write_zeroes": true, 00:05:03.449 "zcopy": true, 00:05:03.449 "get_zone_info": false, 00:05:03.449 "zone_management": false, 00:05:03.449 "zone_append": false, 00:05:03.449 "compare": false, 00:05:03.449 "compare_and_write": false, 00:05:03.449 "abort": true, 00:05:03.449 "seek_hole": false, 00:05:03.449 "seek_data": false, 00:05:03.449 "copy": true, 00:05:03.449 "nvme_iov_md": false 00:05:03.449 }, 00:05:03.449 "memory_domains": [ 00:05:03.449 { 00:05:03.449 "dma_device_id": "system", 00:05:03.449 "dma_device_type": 1 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.449 "dma_device_type": 2 00:05:03.449 } 
00:05:03.449 ], 00:05:03.449 "driver_specific": {} 00:05:03.449 } 00:05:03.449 ]' 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 [2024-11-27 21:37:26.515451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:03.449 [2024-11-27 21:37:26.515537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:03.449 [2024-11-27 21:37:26.515565] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:05:03.449 [2024-11-27 21:37:26.515576] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:03.449 [2024-11-27 21:37:26.518342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:03.449 [2024-11-27 21:37:26.518390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:03.449 Passthru0 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:03.449 { 00:05:03.449 "name": "Malloc2", 00:05:03.449 "aliases": [ 00:05:03.449 "197a6a0b-ebd7-4f38-bf9d-c325646cffd6" 
00:05:03.449 ], 00:05:03.449 "product_name": "Malloc disk", 00:05:03.449 "block_size": 512, 00:05:03.449 "num_blocks": 16384, 00:05:03.449 "uuid": "197a6a0b-ebd7-4f38-bf9d-c325646cffd6", 00:05:03.449 "assigned_rate_limits": { 00:05:03.449 "rw_ios_per_sec": 0, 00:05:03.449 "rw_mbytes_per_sec": 0, 00:05:03.449 "r_mbytes_per_sec": 0, 00:05:03.449 "w_mbytes_per_sec": 0 00:05:03.449 }, 00:05:03.449 "claimed": true, 00:05:03.449 "claim_type": "exclusive_write", 00:05:03.449 "zoned": false, 00:05:03.449 "supported_io_types": { 00:05:03.449 "read": true, 00:05:03.449 "write": true, 00:05:03.449 "unmap": true, 00:05:03.449 "flush": true, 00:05:03.449 "reset": true, 00:05:03.449 "nvme_admin": false, 00:05:03.449 "nvme_io": false, 00:05:03.449 "nvme_io_md": false, 00:05:03.449 "write_zeroes": true, 00:05:03.449 "zcopy": true, 00:05:03.449 "get_zone_info": false, 00:05:03.449 "zone_management": false, 00:05:03.449 "zone_append": false, 00:05:03.449 "compare": false, 00:05:03.449 "compare_and_write": false, 00:05:03.449 "abort": true, 00:05:03.449 "seek_hole": false, 00:05:03.449 "seek_data": false, 00:05:03.449 "copy": true, 00:05:03.449 "nvme_iov_md": false 00:05:03.449 }, 00:05:03.449 "memory_domains": [ 00:05:03.449 { 00:05:03.449 "dma_device_id": "system", 00:05:03.449 "dma_device_type": 1 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.449 "dma_device_type": 2 00:05:03.449 } 00:05:03.449 ], 00:05:03.449 "driver_specific": {} 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "name": "Passthru0", 00:05:03.449 "aliases": [ 00:05:03.449 "886a4481-1a76-55ad-9414-a20a9f3506a3" 00:05:03.449 ], 00:05:03.449 "product_name": "passthru", 00:05:03.449 "block_size": 512, 00:05:03.449 "num_blocks": 16384, 00:05:03.449 "uuid": "886a4481-1a76-55ad-9414-a20a9f3506a3", 00:05:03.449 "assigned_rate_limits": { 00:05:03.449 "rw_ios_per_sec": 0, 00:05:03.449 "rw_mbytes_per_sec": 0, 00:05:03.449 "r_mbytes_per_sec": 0, 00:05:03.449 "w_mbytes_per_sec": 0 
00:05:03.449 }, 00:05:03.449 "claimed": false, 00:05:03.449 "zoned": false, 00:05:03.449 "supported_io_types": { 00:05:03.449 "read": true, 00:05:03.449 "write": true, 00:05:03.449 "unmap": true, 00:05:03.449 "flush": true, 00:05:03.449 "reset": true, 00:05:03.449 "nvme_admin": false, 00:05:03.449 "nvme_io": false, 00:05:03.449 "nvme_io_md": false, 00:05:03.449 "write_zeroes": true, 00:05:03.449 "zcopy": true, 00:05:03.449 "get_zone_info": false, 00:05:03.449 "zone_management": false, 00:05:03.449 "zone_append": false, 00:05:03.449 "compare": false, 00:05:03.449 "compare_and_write": false, 00:05:03.449 "abort": true, 00:05:03.449 "seek_hole": false, 00:05:03.449 "seek_data": false, 00:05:03.449 "copy": true, 00:05:03.449 "nvme_iov_md": false 00:05:03.449 }, 00:05:03.449 "memory_domains": [ 00:05:03.449 { 00:05:03.449 "dma_device_id": "system", 00:05:03.449 "dma_device_type": 1 00:05:03.449 }, 00:05:03.449 { 00:05:03.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:03.449 "dma_device_type": 2 00:05:03.449 } 00:05:03.449 ], 00:05:03.449 "driver_specific": { 00:05:03.449 "passthru": { 00:05:03.449 "name": "Passthru0", 00:05:03.449 "base_bdev_name": "Malloc2" 00:05:03.449 } 00:05:03.449 } 00:05:03.449 } 00:05:03.449 ]' 00:05:03.449 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:03.708 00:05:03.708 real 0m0.323s 00:05:03.708 user 0m0.188s 00:05:03.708 sys 0m0.064s 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.708 21:37:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:03.708 ************************************ 00:05:03.708 END TEST rpc_daemon_integrity 00:05:03.708 ************************************ 00:05:03.708 21:37:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:03.708 21:37:26 rpc -- rpc/rpc.sh@84 -- # killprocess 68884 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 68884 ']' 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@958 -- # kill -0 68884 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68884 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.708 
killing process with pid 68884 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68884' 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@973 -- # kill 68884 00:05:03.708 21:37:26 rpc -- common/autotest_common.sh@978 -- # wait 68884 00:05:04.275 00:05:04.275 real 0m2.871s 00:05:04.275 user 0m3.438s 00:05:04.275 sys 0m0.877s 00:05:04.275 21:37:27 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.275 21:37:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.275 ************************************ 00:05:04.275 END TEST rpc 00:05:04.275 ************************************ 00:05:04.275 21:37:27 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.275 21:37:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.275 21:37:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.275 21:37:27 -- common/autotest_common.sh@10 -- # set +x 00:05:04.275 ************************************ 00:05:04.275 START TEST skip_rpc 00:05:04.275 ************************************ 00:05:04.275 21:37:27 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:04.275 * Looking for test storage... 
00:05:04.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:04.275 21:37:27 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.275 21:37:27 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.275 21:37:27 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.535 21:37:27 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:04.535 21:37:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.536 21:37:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:04.536 21:37:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.536 21:37:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.536 21:37:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.536 21:37:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.536 --rc genhtml_branch_coverage=1 00:05:04.536 --rc genhtml_function_coverage=1 00:05:04.536 --rc genhtml_legend=1 00:05:04.536 --rc geninfo_all_blocks=1 00:05:04.536 --rc geninfo_unexecuted_blocks=1 00:05:04.536 00:05:04.536 ' 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.536 --rc genhtml_branch_coverage=1 00:05:04.536 --rc genhtml_function_coverage=1 00:05:04.536 --rc genhtml_legend=1 00:05:04.536 --rc geninfo_all_blocks=1 00:05:04.536 --rc geninfo_unexecuted_blocks=1 00:05:04.536 00:05:04.536 ' 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:04.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.536 --rc genhtml_branch_coverage=1 00:05:04.536 --rc genhtml_function_coverage=1 00:05:04.536 --rc genhtml_legend=1 00:05:04.536 --rc geninfo_all_blocks=1 00:05:04.536 --rc geninfo_unexecuted_blocks=1 00:05:04.536 00:05:04.536 ' 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.536 --rc genhtml_branch_coverage=1 00:05:04.536 --rc genhtml_function_coverage=1 00:05:04.536 --rc genhtml_legend=1 00:05:04.536 --rc geninfo_all_blocks=1 00:05:04.536 --rc geninfo_unexecuted_blocks=1 00:05:04.536 00:05:04.536 ' 00:05:04.536 21:37:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:04.536 21:37:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:04.536 21:37:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.536 21:37:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.536 ************************************ 00:05:04.536 START TEST skip_rpc 00:05:04.536 ************************************ 00:05:04.536 21:37:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:04.536 21:37:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:04.536 21:37:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69091 00:05:04.536 21:37:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.536 21:37:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:04.536 [2024-11-27 21:37:27.539542] Starting SPDK v25.01-pre 
git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:04.536 [2024-11-27 21:37:27.539675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69091 ] 00:05:04.795 [2024-11-27 21:37:27.698354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.795 [2024-11-27 21:37:27.728031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69091 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 69091 ']' 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 69091 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69091 00:05:10.085 killing process with pid 69091 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69091' 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 69091 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 69091 00:05:10.085 ************************************ 00:05:10.085 END TEST skip_rpc 00:05:10.085 ************************************ 00:05:10.085 00:05:10.085 real 0m5.419s 00:05:10.085 user 0m5.026s 00:05:10.085 sys 0m0.314s 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.085 21:37:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.085 21:37:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.085 21:37:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.085 21:37:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.085 21:37:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.085 
************************************ 00:05:10.085 START TEST skip_rpc_with_json 00:05:10.085 ************************************ 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69173 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69173 00:05:10.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 69173 ']' 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.085 21:37:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.085 [2024-11-27 21:37:33.033418] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:10.085 [2024-11-27 21:37:33.033668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69173 ] 00:05:10.085 [2024-11-27 21:37:33.187577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.345 [2024-11-27 21:37:33.218889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.914 [2024-11-27 21:37:33.891245] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:10.914 request: 00:05:10.914 { 00:05:10.914 "trtype": "tcp", 00:05:10.914 "method": "nvmf_get_transports", 00:05:10.914 "req_id": 1 00:05:10.914 } 00:05:10.914 Got JSON-RPC error response 00:05:10.914 response: 00:05:10.914 { 00:05:10.914 "code": -19, 00:05:10.914 "message": "No such device" 00:05:10.914 } 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.914 [2024-11-27 21:37:33.907373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.914 21:37:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.174 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:11.174 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.174 { 00:05:11.174 "subsystems": [ 00:05:11.174 { 00:05:11.174 "subsystem": "fsdev", 00:05:11.174 "config": [ 00:05:11.174 { 00:05:11.174 "method": "fsdev_set_opts", 00:05:11.174 "params": { 00:05:11.174 "fsdev_io_pool_size": 65535, 00:05:11.174 "fsdev_io_cache_size": 256 00:05:11.174 } 00:05:11.174 } 00:05:11.174 ] 00:05:11.174 }, 00:05:11.174 { 00:05:11.174 "subsystem": "keyring", 00:05:11.174 "config": [] 00:05:11.174 }, 00:05:11.174 { 00:05:11.174 "subsystem": "iobuf", 00:05:11.174 "config": [ 00:05:11.174 { 00:05:11.174 "method": "iobuf_set_options", 00:05:11.174 "params": { 00:05:11.174 "small_pool_count": 8192, 00:05:11.174 "large_pool_count": 1024, 00:05:11.174 "small_bufsize": 8192, 00:05:11.174 "large_bufsize": 135168, 00:05:11.174 "enable_numa": false 00:05:11.174 } 00:05:11.174 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "sock", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "sock_set_default_impl", 00:05:11.175 "params": { 00:05:11.175 "impl_name": "posix" 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "sock_impl_set_options", 00:05:11.175 "params": { 00:05:11.175 "impl_name": "ssl", 00:05:11.175 "recv_buf_size": 4096, 00:05:11.175 "send_buf_size": 4096, 00:05:11.175 "enable_recv_pipe": true, 00:05:11.175 "enable_quickack": false, 00:05:11.175 
"enable_placement_id": 0, 00:05:11.175 "enable_zerocopy_send_server": true, 00:05:11.175 "enable_zerocopy_send_client": false, 00:05:11.175 "zerocopy_threshold": 0, 00:05:11.175 "tls_version": 0, 00:05:11.175 "enable_ktls": false 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "sock_impl_set_options", 00:05:11.175 "params": { 00:05:11.175 "impl_name": "posix", 00:05:11.175 "recv_buf_size": 2097152, 00:05:11.175 "send_buf_size": 2097152, 00:05:11.175 "enable_recv_pipe": true, 00:05:11.175 "enable_quickack": false, 00:05:11.175 "enable_placement_id": 0, 00:05:11.175 "enable_zerocopy_send_server": true, 00:05:11.175 "enable_zerocopy_send_client": false, 00:05:11.175 "zerocopy_threshold": 0, 00:05:11.175 "tls_version": 0, 00:05:11.175 "enable_ktls": false 00:05:11.175 } 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "vmd", 00:05:11.175 "config": [] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "accel", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "accel_set_options", 00:05:11.175 "params": { 00:05:11.175 "small_cache_size": 128, 00:05:11.175 "large_cache_size": 16, 00:05:11.175 "task_count": 2048, 00:05:11.175 "sequence_count": 2048, 00:05:11.175 "buf_count": 2048 00:05:11.175 } 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "bdev", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "bdev_set_options", 00:05:11.175 "params": { 00:05:11.175 "bdev_io_pool_size": 65535, 00:05:11.175 "bdev_io_cache_size": 256, 00:05:11.175 "bdev_auto_examine": true, 00:05:11.175 "iobuf_small_cache_size": 128, 00:05:11.175 "iobuf_large_cache_size": 16 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "bdev_raid_set_options", 00:05:11.175 "params": { 00:05:11.175 "process_window_size_kb": 1024, 00:05:11.175 "process_max_bandwidth_mb_sec": 0 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "bdev_iscsi_set_options", 
00:05:11.175 "params": { 00:05:11.175 "timeout_sec": 30 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "bdev_nvme_set_options", 00:05:11.175 "params": { 00:05:11.175 "action_on_timeout": "none", 00:05:11.175 "timeout_us": 0, 00:05:11.175 "timeout_admin_us": 0, 00:05:11.175 "keep_alive_timeout_ms": 10000, 00:05:11.175 "arbitration_burst": 0, 00:05:11.175 "low_priority_weight": 0, 00:05:11.175 "medium_priority_weight": 0, 00:05:11.175 "high_priority_weight": 0, 00:05:11.175 "nvme_adminq_poll_period_us": 10000, 00:05:11.175 "nvme_ioq_poll_period_us": 0, 00:05:11.175 "io_queue_requests": 0, 00:05:11.175 "delay_cmd_submit": true, 00:05:11.175 "transport_retry_count": 4, 00:05:11.175 "bdev_retry_count": 3, 00:05:11.175 "transport_ack_timeout": 0, 00:05:11.175 "ctrlr_loss_timeout_sec": 0, 00:05:11.175 "reconnect_delay_sec": 0, 00:05:11.175 "fast_io_fail_timeout_sec": 0, 00:05:11.175 "disable_auto_failback": false, 00:05:11.175 "generate_uuids": false, 00:05:11.175 "transport_tos": 0, 00:05:11.175 "nvme_error_stat": false, 00:05:11.175 "rdma_srq_size": 0, 00:05:11.175 "io_path_stat": false, 00:05:11.175 "allow_accel_sequence": false, 00:05:11.175 "rdma_max_cq_size": 0, 00:05:11.175 "rdma_cm_event_timeout_ms": 0, 00:05:11.175 "dhchap_digests": [ 00:05:11.175 "sha256", 00:05:11.175 "sha384", 00:05:11.175 "sha512" 00:05:11.175 ], 00:05:11.175 "dhchap_dhgroups": [ 00:05:11.175 "null", 00:05:11.175 "ffdhe2048", 00:05:11.175 "ffdhe3072", 00:05:11.175 "ffdhe4096", 00:05:11.175 "ffdhe6144", 00:05:11.175 "ffdhe8192" 00:05:11.175 ] 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "bdev_nvme_set_hotplug", 00:05:11.175 "params": { 00:05:11.175 "period_us": 100000, 00:05:11.175 "enable": false 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "bdev_wait_for_examine" 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "scsi", 00:05:11.175 "config": null 00:05:11.175 }, 00:05:11.175 { 
00:05:11.175 "subsystem": "scheduler", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "framework_set_scheduler", 00:05:11.175 "params": { 00:05:11.175 "name": "static" 00:05:11.175 } 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "vhost_scsi", 00:05:11.175 "config": [] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "vhost_blk", 00:05:11.175 "config": [] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "ublk", 00:05:11.175 "config": [] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "nbd", 00:05:11.175 "config": [] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "nvmf", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "nvmf_set_config", 00:05:11.175 "params": { 00:05:11.175 "discovery_filter": "match_any", 00:05:11.175 "admin_cmd_passthru": { 00:05:11.175 "identify_ctrlr": false 00:05:11.175 }, 00:05:11.175 "dhchap_digests": [ 00:05:11.175 "sha256", 00:05:11.175 "sha384", 00:05:11.175 "sha512" 00:05:11.175 ], 00:05:11.175 "dhchap_dhgroups": [ 00:05:11.175 "null", 00:05:11.175 "ffdhe2048", 00:05:11.175 "ffdhe3072", 00:05:11.175 "ffdhe4096", 00:05:11.175 "ffdhe6144", 00:05:11.175 "ffdhe8192" 00:05:11.175 ] 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "nvmf_set_max_subsystems", 00:05:11.175 "params": { 00:05:11.175 "max_subsystems": 1024 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "nvmf_set_crdt", 00:05:11.175 "params": { 00:05:11.175 "crdt1": 0, 00:05:11.175 "crdt2": 0, 00:05:11.175 "crdt3": 0 00:05:11.175 } 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "method": "nvmf_create_transport", 00:05:11.175 "params": { 00:05:11.175 "trtype": "TCP", 00:05:11.175 "max_queue_depth": 128, 00:05:11.175 "max_io_qpairs_per_ctrlr": 127, 00:05:11.175 "in_capsule_data_size": 4096, 00:05:11.175 "max_io_size": 131072, 00:05:11.175 "io_unit_size": 131072, 00:05:11.175 "max_aq_depth": 128, 00:05:11.175 "num_shared_buffers": 511, 
00:05:11.175 "buf_cache_size": 4294967295, 00:05:11.175 "dif_insert_or_strip": false, 00:05:11.175 "zcopy": false, 00:05:11.175 "c2h_success": true, 00:05:11.175 "sock_priority": 0, 00:05:11.175 "abort_timeout_sec": 1, 00:05:11.175 "ack_timeout": 0, 00:05:11.175 "data_wr_pool_size": 0 00:05:11.175 } 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 }, 00:05:11.175 { 00:05:11.175 "subsystem": "iscsi", 00:05:11.175 "config": [ 00:05:11.175 { 00:05:11.175 "method": "iscsi_set_options", 00:05:11.175 "params": { 00:05:11.175 "node_base": "iqn.2016-06.io.spdk", 00:05:11.175 "max_sessions": 128, 00:05:11.175 "max_connections_per_session": 2, 00:05:11.175 "max_queue_depth": 64, 00:05:11.175 "default_time2wait": 2, 00:05:11.175 "default_time2retain": 20, 00:05:11.175 "first_burst_length": 8192, 00:05:11.175 "immediate_data": true, 00:05:11.175 "allow_duplicated_isid": false, 00:05:11.175 "error_recovery_level": 0, 00:05:11.175 "nop_timeout": 60, 00:05:11.175 "nop_in_interval": 30, 00:05:11.175 "disable_chap": false, 00:05:11.175 "require_chap": false, 00:05:11.175 "mutual_chap": false, 00:05:11.175 "chap_group": 0, 00:05:11.175 "max_large_datain_per_connection": 64, 00:05:11.175 "max_r2t_per_connection": 4, 00:05:11.175 "pdu_pool_size": 36864, 00:05:11.175 "immediate_data_pool_size": 16384, 00:05:11.175 "data_out_pool_size": 2048 00:05:11.175 } 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 } 00:05:11.175 ] 00:05:11.175 } 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69173 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69173 ']' 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69173 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:11.175 21:37:34 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69173 00:05:11.176 killing process with pid 69173 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69173' 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69173 00:05:11.176 21:37:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69173 00:05:11.435 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.435 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69201 00:05:11.435 21:37:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69201 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 69201 ']' 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 69201 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69201 00:05:16.717 killing process with pid 69201 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69201' 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 69201 00:05:16.717 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 69201 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:16.977 00:05:16.977 real 0m6.981s 00:05:16.977 user 0m6.570s 00:05:16.977 sys 0m0.728s 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.977 ************************************ 00:05:16.977 END TEST skip_rpc_with_json 00:05:16.977 ************************************ 00:05:16.977 21:37:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:16.977 21:37:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.977 21:37:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.977 21:37:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.977 ************************************ 00:05:16.977 START TEST skip_rpc_with_delay 00:05:16.977 ************************************ 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:16.977 
21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.977 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.978 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.978 21:37:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:16.978 [2024-11-27 21:37:40.075291] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:17.238 ************************************ 00:05:17.238 END TEST skip_rpc_with_delay 00:05:17.238 ************************************ 00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.238 00:05:17.238 real 0m0.162s 00:05:17.238 user 0m0.084s 00:05:17.238 sys 0m0.077s 00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.238 21:37:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:17.238 21:37:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:17.238 21:37:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:17.238 21:37:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:17.238 21:37:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.238 21:37:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.238 21:37:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.238 ************************************ 00:05:17.238 START TEST exit_on_failed_rpc_init 00:05:17.238 ************************************ 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69313 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69313 00:05:17.238 21:37:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 69313 ']' 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.238 21:37:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:17.238 [2024-11-27 21:37:40.301866] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:17.238 [2024-11-27 21:37:40.302088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69313 ] 00:05:17.498 [2024-11-27 21:37:40.435704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.498 [2024-11-27 21:37:40.461952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.069 21:37:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:18.069 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:18.329 [2024-11-27 21:37:41.214165] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:18.329 [2024-11-27 21:37:41.214694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69331 ] 00:05:18.329 [2024-11-27 21:37:41.371153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.329 [2024-11-27 21:37:41.398016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.329 [2024-11-27 21:37:41.398232] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:18.329 [2024-11-27 21:37:41.398296] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:18.329 [2024-11-27 21:37:41.398339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69313 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 69313 ']' 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 69313 00:05:18.589 21:37:41 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69313 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69313' 00:05:18.589 killing process with pid 69313 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 69313 00:05:18.589 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 69313 00:05:18.850 00:05:18.850 real 0m1.692s 00:05:18.850 user 0m1.813s 00:05:18.850 sys 0m0.462s 00:05:18.850 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.850 21:37:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.850 ************************************ 00:05:18.850 END TEST exit_on_failed_rpc_init 00:05:18.850 ************************************ 00:05:18.850 21:37:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.850 00:05:18.850 real 0m14.751s 00:05:18.850 user 0m13.704s 00:05:18.850 sys 0m1.881s 00:05:18.850 21:37:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.850 ************************************ 00:05:18.850 END TEST skip_rpc 00:05:18.850 ************************************ 00:05:18.850 21:37:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.109 21:37:42 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.109 21:37:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.109 21:37:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.109 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.109 ************************************ 00:05:19.109 START TEST rpc_client 00:05:19.109 ************************************ 00:05:19.109 21:37:42 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:19.109 * Looking for test storage... 00:05:19.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:19.109 21:37:42 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.109 21:37:42 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.109 21:37:42 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.109 21:37:42 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.109 21:37:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.109 21:37:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.109 21:37:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.110 21:37:42 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.370 21:37:42 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.370 --rc genhtml_branch_coverage=1 00:05:19.370 --rc genhtml_function_coverage=1 00:05:19.370 --rc genhtml_legend=1 00:05:19.370 --rc geninfo_all_blocks=1 00:05:19.370 --rc geninfo_unexecuted_blocks=1 00:05:19.370 00:05:19.370 ' 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.370 --rc genhtml_branch_coverage=1 00:05:19.370 --rc genhtml_function_coverage=1 00:05:19.370 --rc 
genhtml_legend=1 00:05:19.370 --rc geninfo_all_blocks=1 00:05:19.370 --rc geninfo_unexecuted_blocks=1 00:05:19.370 00:05:19.370 ' 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.370 --rc genhtml_branch_coverage=1 00:05:19.370 --rc genhtml_function_coverage=1 00:05:19.370 --rc genhtml_legend=1 00:05:19.370 --rc geninfo_all_blocks=1 00:05:19.370 --rc geninfo_unexecuted_blocks=1 00:05:19.370 00:05:19.370 ' 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.370 --rc genhtml_branch_coverage=1 00:05:19.370 --rc genhtml_function_coverage=1 00:05:19.370 --rc genhtml_legend=1 00:05:19.370 --rc geninfo_all_blocks=1 00:05:19.370 --rc geninfo_unexecuted_blocks=1 00:05:19.370 00:05:19.370 ' 00:05:19.370 21:37:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:19.370 OK 00:05:19.370 21:37:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:19.370 00:05:19.370 real 0m0.287s 00:05:19.370 user 0m0.153s 00:05:19.370 sys 0m0.150s 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.370 21:37:42 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:19.370 ************************************ 00:05:19.370 END TEST rpc_client 00:05:19.370 ************************************ 00:05:19.370 21:37:42 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.370 21:37:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.370 21:37:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.370 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.370 ************************************ 00:05:19.370 START TEST json_config 
00:05:19.370 ************************************ 00:05:19.370 21:37:42 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:19.370 21:37:42 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.370 21:37:42 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:19.370 21:37:42 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.631 21:37:42 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.631 21:37:42 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.631 21:37:42 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.631 21:37:42 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.631 21:37:42 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.631 21:37:42 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.631 21:37:42 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:19.631 21:37:42 json_config -- scripts/common.sh@345 -- # : 1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.631 21:37:42 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.631 21:37:42 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@353 -- # local d=1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.631 21:37:42 json_config -- scripts/common.sh@355 -- # echo 1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.631 21:37:42 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@353 -- # local d=2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.631 21:37:42 json_config -- scripts/common.sh@355 -- # echo 2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.631 21:37:42 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.631 21:37:42 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.631 21:37:42 json_config -- scripts/common.sh@368 -- # return 0 00:05:19.631 21:37:42 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.631 21:37:42 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.631 --rc genhtml_branch_coverage=1 00:05:19.631 --rc genhtml_function_coverage=1 00:05:19.632 --rc genhtml_legend=1 00:05:19.632 --rc geninfo_all_blocks=1 00:05:19.632 --rc geninfo_unexecuted_blocks=1 00:05:19.632 00:05:19.632 ' 00:05:19.632 21:37:42 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.632 --rc genhtml_branch_coverage=1 00:05:19.632 --rc genhtml_function_coverage=1 00:05:19.632 --rc genhtml_legend=1 00:05:19.632 --rc geninfo_all_blocks=1 00:05:19.632 --rc geninfo_unexecuted_blocks=1 00:05:19.632 00:05:19.632 ' 00:05:19.632 21:37:42 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.632 --rc genhtml_branch_coverage=1 00:05:19.632 --rc genhtml_function_coverage=1 00:05:19.632 --rc genhtml_legend=1 00:05:19.632 --rc geninfo_all_blocks=1 00:05:19.632 --rc geninfo_unexecuted_blocks=1 00:05:19.632 00:05:19.632 ' 00:05:19.632 21:37:42 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.632 --rc genhtml_branch_coverage=1 00:05:19.632 --rc genhtml_function_coverage=1 00:05:19.632 --rc genhtml_legend=1 00:05:19.632 --rc geninfo_all_blocks=1 00:05:19.632 --rc geninfo_unexecuted_blocks=1 00:05:19.632 00:05:19.632 ' 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:422ad100-7c9f-4e2b-8d8c-77b3989655bc 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=422ad100-7c9f-4e2b-8d8c-77b3989655bc 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.632 21:37:42 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.632 21:37:42 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.632 21:37:42 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.632 21:37:42 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.632 21:37:42 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.632 21:37:42 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.632 21:37:42 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.632 21:37:42 json_config -- paths/export.sh@5 -- # export PATH 00:05:19.632 21:37:42 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@51 -- # : 0 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.632 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.632 21:37:42 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:19.632 WARNING: No tests are enabled so not running JSON configuration tests 00:05:19.632 21:37:42 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:19.632 00:05:19.632 real 0m0.194s 00:05:19.632 user 0m0.113s 00:05:19.632 sys 0m0.089s 00:05:19.632 21:37:42 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.632 21:37:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.632 ************************************ 00:05:19.632 END TEST json_config 00:05:19.632 ************************************ 00:05:19.632 21:37:42 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.632 21:37:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.632 21:37:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.632 21:37:42 -- common/autotest_common.sh@10 -- # set +x 00:05:19.632 ************************************ 00:05:19.632 START TEST json_config_extra_key 00:05:19.632 ************************************ 00:05:19.632 21:37:42 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:19.632 21:37:42 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:19.632 21:37:42 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:05:19.632 21:37:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.894 21:37:42 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:19.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.894 --rc genhtml_branch_coverage=1 00:05:19.894 --rc genhtml_function_coverage=1 00:05:19.894 --rc genhtml_legend=1 00:05:19.894 --rc geninfo_all_blocks=1 00:05:19.894 --rc geninfo_unexecuted_blocks=1 00:05:19.894 00:05:19.894 ' 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:19.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.894 --rc genhtml_branch_coverage=1 00:05:19.894 --rc genhtml_function_coverage=1 00:05:19.894 --rc 
genhtml_legend=1 00:05:19.894 --rc geninfo_all_blocks=1 00:05:19.894 --rc geninfo_unexecuted_blocks=1 00:05:19.894 00:05:19.894 ' 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:19.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.894 --rc genhtml_branch_coverage=1 00:05:19.894 --rc genhtml_function_coverage=1 00:05:19.894 --rc genhtml_legend=1 00:05:19.894 --rc geninfo_all_blocks=1 00:05:19.894 --rc geninfo_unexecuted_blocks=1 00:05:19.894 00:05:19.894 ' 00:05:19.894 21:37:42 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:19.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.894 --rc genhtml_branch_coverage=1 00:05:19.894 --rc genhtml_function_coverage=1 00:05:19.894 --rc genhtml_legend=1 00:05:19.894 --rc geninfo_all_blocks=1 00:05:19.894 --rc geninfo_unexecuted_blocks=1 00:05:19.894 00:05:19.894 ' 00:05:19.894 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:422ad100-7c9f-4e2b-8d8c-77b3989655bc 00:05:19.894 21:37:42 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=422ad100-7c9f-4e2b-8d8c-77b3989655bc 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:19.895 21:37:42 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:19.895 21:37:42 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:19.895 21:37:42 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:19.895 21:37:42 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:19.895 21:37:42 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.895 21:37:42 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.895 21:37:42 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.895 21:37:42 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:19.895 21:37:42 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:19.895 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:19.895 21:37:42 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:19.895 INFO: launching applications... 
00:05:19.895 21:37:42 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69519 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.895 Waiting for target to run... 00:05:19.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69519 /var/tmp/spdk_tgt.sock 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 69519 ']' 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.895 21:37:42 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.895 21:37:42 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:19.895 [2024-11-27 21:37:42.944062] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:19.895 [2024-11-27 21:37:42.944216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69519 ] 00:05:20.465 [2024-11-27 21:37:43.313349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.465 [2024-11-27 21:37:43.331095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.725 00:05:20.725 INFO: shutting down applications... 
00:05:20.725 21:37:43 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.725 21:37:43 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:20.725 21:37:43 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:20.725 21:37:43 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69519 ]] 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69519 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69519 00:05:20.725 21:37:43 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69519 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.296 21:37:44 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.296 SPDK target shutdown done 00:05:21.296 21:37:44 json_config_extra_key -- 
json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.296 Success 00:05:21.296 00:05:21.296 real 0m1.653s 00:05:21.296 user 0m1.356s 00:05:21.296 sys 0m0.469s 00:05:21.296 21:37:44 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.296 21:37:44 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.296 ************************************ 00:05:21.296 END TEST json_config_extra_key 00:05:21.296 ************************************ 00:05:21.296 21:37:44 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.296 21:37:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.296 21:37:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.296 21:37:44 -- common/autotest_common.sh@10 -- # set +x 00:05:21.296 ************************************ 00:05:21.296 START TEST alias_rpc 00:05:21.296 ************************************ 00:05:21.296 21:37:44 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.557 * Looking for test storage... 
00:05:21.557 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.557 21:37:44 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.557 --rc genhtml_branch_coverage=1 00:05:21.557 --rc genhtml_function_coverage=1 00:05:21.557 --rc genhtml_legend=1 00:05:21.557 --rc geninfo_all_blocks=1 00:05:21.557 --rc geninfo_unexecuted_blocks=1 00:05:21.557 00:05:21.557 ' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.557 --rc genhtml_branch_coverage=1 00:05:21.557 --rc genhtml_function_coverage=1 00:05:21.557 --rc genhtml_legend=1 00:05:21.557 --rc geninfo_all_blocks=1 00:05:21.557 --rc geninfo_unexecuted_blocks=1 00:05:21.557 00:05:21.557 ' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:05:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.557 --rc genhtml_branch_coverage=1 00:05:21.557 --rc genhtml_function_coverage=1 00:05:21.557 --rc genhtml_legend=1 00:05:21.557 --rc geninfo_all_blocks=1 00:05:21.557 --rc geninfo_unexecuted_blocks=1 00:05:21.557 00:05:21.557 ' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:21.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.557 --rc genhtml_branch_coverage=1 00:05:21.557 --rc genhtml_function_coverage=1 00:05:21.557 --rc genhtml_legend=1 00:05:21.557 --rc geninfo_all_blocks=1 00:05:21.557 --rc geninfo_unexecuted_blocks=1 00:05:21.557 00:05:21.557 ' 00:05:21.557 21:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:21.557 21:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69587 00:05:21.557 21:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:21.557 21:37:44 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69587 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 69587 ']' 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.557 21:37:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:21.557 [2024-11-27 21:37:44.673214] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:21.557 [2024-11-27 21:37:44.673556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69587 ] 00:05:21.817 [2024-11-27 21:37:44.832766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.817 [2024-11-27 21:37:44.858524] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.385 21:37:45 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.385 21:37:45 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:22.385 21:37:45 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:22.645 21:37:45 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69587 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 69587 ']' 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 69587 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69587 00:05:22.645 killing process with pid 69587 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69587' 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@973 -- # kill 69587 00:05:22.645 21:37:45 alias_rpc -- common/autotest_common.sh@978 -- # wait 69587 00:05:23.215 ************************************ 00:05:23.215 END TEST alias_rpc 00:05:23.215 ************************************ 00:05:23.215 00:05:23.215 real 
0m1.718s 00:05:23.215 user 0m1.715s 00:05:23.215 sys 0m0.498s 00:05:23.215 21:37:46 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.215 21:37:46 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.215 21:37:46 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:23.215 21:37:46 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.215 21:37:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.215 21:37:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.215 21:37:46 -- common/autotest_common.sh@10 -- # set +x 00:05:23.215 ************************************ 00:05:23.215 START TEST spdkcli_tcp 00:05:23.215 ************************************ 00:05:23.215 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:23.215 * Looking for test storage... 00:05:23.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:23.216 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.216 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.216 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.216 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.216 
21:37:46 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:23.216 21:37:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.476 21:37:46 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:23.476 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.476 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.476 --rc genhtml_branch_coverage=1 00:05:23.476 --rc genhtml_function_coverage=1 00:05:23.476 --rc genhtml_legend=1 
00:05:23.476 --rc geninfo_all_blocks=1 00:05:23.476 --rc geninfo_unexecuted_blocks=1 00:05:23.476 00:05:23.476 ' 00:05:23.476 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.476 --rc genhtml_branch_coverage=1 00:05:23.476 --rc genhtml_function_coverage=1 00:05:23.476 --rc genhtml_legend=1 00:05:23.476 --rc geninfo_all_blocks=1 00:05:23.476 --rc geninfo_unexecuted_blocks=1 00:05:23.476 00:05:23.476 ' 00:05:23.476 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.477 --rc genhtml_branch_coverage=1 00:05:23.477 --rc genhtml_function_coverage=1 00:05:23.477 --rc genhtml_legend=1 00:05:23.477 --rc geninfo_all_blocks=1 00:05:23.477 --rc geninfo_unexecuted_blocks=1 00:05:23.477 00:05:23.477 ' 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.477 --rc genhtml_branch_coverage=1 00:05:23.477 --rc genhtml_function_coverage=1 00:05:23.477 --rc genhtml_legend=1 00:05:23.477 --rc geninfo_all_blocks=1 00:05:23.477 --rc geninfo_unexecuted_blocks=1 00:05:23.477 00:05:23.477 ' 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:23.477 21:37:46 spdkcli_tcp -- 
spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=69672 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:23.477 21:37:46 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 69672 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 69672 ']' 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.477 21:37:46 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.477 [2024-11-27 21:37:46.451229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:23.477 [2024-11-27 21:37:46.451451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69672 ] 00:05:23.738 [2024-11-27 21:37:46.609089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:23.738 [2024-11-27 21:37:46.636818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.738 [2024-11-27 21:37:46.636930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.308 21:37:47 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.308 21:37:47 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:24.308 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=69689 00:05:24.308 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:24.308 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:24.569 [ 00:05:24.569 "bdev_malloc_delete", 00:05:24.569 "bdev_malloc_create", 00:05:24.569 "bdev_null_resize", 00:05:24.569 "bdev_null_delete", 00:05:24.569 "bdev_null_create", 00:05:24.569 "bdev_nvme_cuse_unregister", 00:05:24.569 "bdev_nvme_cuse_register", 00:05:24.569 "bdev_opal_new_user", 00:05:24.569 "bdev_opal_set_lock_state", 00:05:24.569 "bdev_opal_delete", 00:05:24.569 "bdev_opal_get_info", 00:05:24.569 "bdev_opal_create", 00:05:24.569 "bdev_nvme_opal_revert", 00:05:24.569 "bdev_nvme_opal_init", 00:05:24.569 "bdev_nvme_send_cmd", 00:05:24.569 "bdev_nvme_set_keys", 00:05:24.569 "bdev_nvme_get_path_iostat", 00:05:24.569 "bdev_nvme_get_mdns_discovery_info", 00:05:24.569 "bdev_nvme_stop_mdns_discovery", 00:05:24.569 "bdev_nvme_start_mdns_discovery", 00:05:24.569 "bdev_nvme_set_multipath_policy", 00:05:24.569 
"bdev_nvme_set_preferred_path", 00:05:24.569 "bdev_nvme_get_io_paths", 00:05:24.569 "bdev_nvme_remove_error_injection", 00:05:24.569 "bdev_nvme_add_error_injection", 00:05:24.569 "bdev_nvme_get_discovery_info", 00:05:24.569 "bdev_nvme_stop_discovery", 00:05:24.569 "bdev_nvme_start_discovery", 00:05:24.569 "bdev_nvme_get_controller_health_info", 00:05:24.569 "bdev_nvme_disable_controller", 00:05:24.569 "bdev_nvme_enable_controller", 00:05:24.569 "bdev_nvme_reset_controller", 00:05:24.569 "bdev_nvme_get_transport_statistics", 00:05:24.569 "bdev_nvme_apply_firmware", 00:05:24.569 "bdev_nvme_detach_controller", 00:05:24.569 "bdev_nvme_get_controllers", 00:05:24.569 "bdev_nvme_attach_controller", 00:05:24.569 "bdev_nvme_set_hotplug", 00:05:24.569 "bdev_nvme_set_options", 00:05:24.569 "bdev_passthru_delete", 00:05:24.569 "bdev_passthru_create", 00:05:24.569 "bdev_lvol_set_parent_bdev", 00:05:24.569 "bdev_lvol_set_parent", 00:05:24.569 "bdev_lvol_check_shallow_copy", 00:05:24.569 "bdev_lvol_start_shallow_copy", 00:05:24.569 "bdev_lvol_grow_lvstore", 00:05:24.569 "bdev_lvol_get_lvols", 00:05:24.569 "bdev_lvol_get_lvstores", 00:05:24.569 "bdev_lvol_delete", 00:05:24.569 "bdev_lvol_set_read_only", 00:05:24.569 "bdev_lvol_resize", 00:05:24.569 "bdev_lvol_decouple_parent", 00:05:24.569 "bdev_lvol_inflate", 00:05:24.569 "bdev_lvol_rename", 00:05:24.569 "bdev_lvol_clone_bdev", 00:05:24.569 "bdev_lvol_clone", 00:05:24.569 "bdev_lvol_snapshot", 00:05:24.569 "bdev_lvol_create", 00:05:24.569 "bdev_lvol_delete_lvstore", 00:05:24.569 "bdev_lvol_rename_lvstore", 00:05:24.569 "bdev_lvol_create_lvstore", 00:05:24.569 "bdev_raid_set_options", 00:05:24.569 "bdev_raid_remove_base_bdev", 00:05:24.569 "bdev_raid_add_base_bdev", 00:05:24.569 "bdev_raid_delete", 00:05:24.569 "bdev_raid_create", 00:05:24.569 "bdev_raid_get_bdevs", 00:05:24.569 "bdev_error_inject_error", 00:05:24.569 "bdev_error_delete", 00:05:24.569 "bdev_error_create", 00:05:24.569 "bdev_split_delete", 00:05:24.569 
"bdev_split_create", 00:05:24.569 "bdev_delay_delete", 00:05:24.569 "bdev_delay_create", 00:05:24.569 "bdev_delay_update_latency", 00:05:24.569 "bdev_zone_block_delete", 00:05:24.569 "bdev_zone_block_create", 00:05:24.569 "blobfs_create", 00:05:24.569 "blobfs_detect", 00:05:24.569 "blobfs_set_cache_size", 00:05:24.569 "bdev_aio_delete", 00:05:24.569 "bdev_aio_rescan", 00:05:24.569 "bdev_aio_create", 00:05:24.569 "bdev_ftl_set_property", 00:05:24.569 "bdev_ftl_get_properties", 00:05:24.569 "bdev_ftl_get_stats", 00:05:24.569 "bdev_ftl_unmap", 00:05:24.569 "bdev_ftl_unload", 00:05:24.569 "bdev_ftl_delete", 00:05:24.569 "bdev_ftl_load", 00:05:24.569 "bdev_ftl_create", 00:05:24.569 "bdev_virtio_attach_controller", 00:05:24.569 "bdev_virtio_scsi_get_devices", 00:05:24.569 "bdev_virtio_detach_controller", 00:05:24.569 "bdev_virtio_blk_set_hotplug", 00:05:24.569 "bdev_iscsi_delete", 00:05:24.569 "bdev_iscsi_create", 00:05:24.569 "bdev_iscsi_set_options", 00:05:24.569 "accel_error_inject_error", 00:05:24.569 "ioat_scan_accel_module", 00:05:24.569 "dsa_scan_accel_module", 00:05:24.569 "iaa_scan_accel_module", 00:05:24.569 "keyring_file_remove_key", 00:05:24.569 "keyring_file_add_key", 00:05:24.569 "keyring_linux_set_options", 00:05:24.569 "fsdev_aio_delete", 00:05:24.569 "fsdev_aio_create", 00:05:24.569 "iscsi_get_histogram", 00:05:24.569 "iscsi_enable_histogram", 00:05:24.569 "iscsi_set_options", 00:05:24.569 "iscsi_get_auth_groups", 00:05:24.569 "iscsi_auth_group_remove_secret", 00:05:24.569 "iscsi_auth_group_add_secret", 00:05:24.569 "iscsi_delete_auth_group", 00:05:24.569 "iscsi_create_auth_group", 00:05:24.569 "iscsi_set_discovery_auth", 00:05:24.569 "iscsi_get_options", 00:05:24.569 "iscsi_target_node_request_logout", 00:05:24.569 "iscsi_target_node_set_redirect", 00:05:24.569 "iscsi_target_node_set_auth", 00:05:24.569 "iscsi_target_node_add_lun", 00:05:24.569 "iscsi_get_stats", 00:05:24.569 "iscsi_get_connections", 00:05:24.569 "iscsi_portal_group_set_auth", 
00:05:24.569 "iscsi_start_portal_group", 00:05:24.569 "iscsi_delete_portal_group", 00:05:24.569 "iscsi_create_portal_group", 00:05:24.569 "iscsi_get_portal_groups", 00:05:24.569 "iscsi_delete_target_node", 00:05:24.569 "iscsi_target_node_remove_pg_ig_maps", 00:05:24.569 "iscsi_target_node_add_pg_ig_maps", 00:05:24.569 "iscsi_create_target_node", 00:05:24.569 "iscsi_get_target_nodes", 00:05:24.569 "iscsi_delete_initiator_group", 00:05:24.569 "iscsi_initiator_group_remove_initiators", 00:05:24.569 "iscsi_initiator_group_add_initiators", 00:05:24.569 "iscsi_create_initiator_group", 00:05:24.569 "iscsi_get_initiator_groups", 00:05:24.569 "nvmf_set_crdt", 00:05:24.569 "nvmf_set_config", 00:05:24.569 "nvmf_set_max_subsystems", 00:05:24.569 "nvmf_stop_mdns_prr", 00:05:24.569 "nvmf_publish_mdns_prr", 00:05:24.569 "nvmf_subsystem_get_listeners", 00:05:24.569 "nvmf_subsystem_get_qpairs", 00:05:24.569 "nvmf_subsystem_get_controllers", 00:05:24.569 "nvmf_get_stats", 00:05:24.569 "nvmf_get_transports", 00:05:24.569 "nvmf_create_transport", 00:05:24.569 "nvmf_get_targets", 00:05:24.569 "nvmf_delete_target", 00:05:24.569 "nvmf_create_target", 00:05:24.569 "nvmf_subsystem_allow_any_host", 00:05:24.569 "nvmf_subsystem_set_keys", 00:05:24.569 "nvmf_subsystem_remove_host", 00:05:24.569 "nvmf_subsystem_add_host", 00:05:24.569 "nvmf_ns_remove_host", 00:05:24.569 "nvmf_ns_add_host", 00:05:24.569 "nvmf_subsystem_remove_ns", 00:05:24.569 "nvmf_subsystem_set_ns_ana_group", 00:05:24.569 "nvmf_subsystem_add_ns", 00:05:24.569 "nvmf_subsystem_listener_set_ana_state", 00:05:24.570 "nvmf_discovery_get_referrals", 00:05:24.570 "nvmf_discovery_remove_referral", 00:05:24.570 "nvmf_discovery_add_referral", 00:05:24.570 "nvmf_subsystem_remove_listener", 00:05:24.570 "nvmf_subsystem_add_listener", 00:05:24.570 "nvmf_delete_subsystem", 00:05:24.570 "nvmf_create_subsystem", 00:05:24.570 "nvmf_get_subsystems", 00:05:24.570 "env_dpdk_get_mem_stats", 00:05:24.570 "nbd_get_disks", 00:05:24.570 
"nbd_stop_disk", 00:05:24.570 "nbd_start_disk", 00:05:24.570 "ublk_recover_disk", 00:05:24.570 "ublk_get_disks", 00:05:24.570 "ublk_stop_disk", 00:05:24.570 "ublk_start_disk", 00:05:24.570 "ublk_destroy_target", 00:05:24.570 "ublk_create_target", 00:05:24.570 "virtio_blk_create_transport", 00:05:24.570 "virtio_blk_get_transports", 00:05:24.570 "vhost_controller_set_coalescing", 00:05:24.570 "vhost_get_controllers", 00:05:24.570 "vhost_delete_controller", 00:05:24.570 "vhost_create_blk_controller", 00:05:24.570 "vhost_scsi_controller_remove_target", 00:05:24.570 "vhost_scsi_controller_add_target", 00:05:24.570 "vhost_start_scsi_controller", 00:05:24.570 "vhost_create_scsi_controller", 00:05:24.570 "thread_set_cpumask", 00:05:24.570 "scheduler_set_options", 00:05:24.570 "framework_get_governor", 00:05:24.570 "framework_get_scheduler", 00:05:24.570 "framework_set_scheduler", 00:05:24.570 "framework_get_reactors", 00:05:24.570 "thread_get_io_channels", 00:05:24.570 "thread_get_pollers", 00:05:24.570 "thread_get_stats", 00:05:24.570 "framework_monitor_context_switch", 00:05:24.570 "spdk_kill_instance", 00:05:24.570 "log_enable_timestamps", 00:05:24.570 "log_get_flags", 00:05:24.570 "log_clear_flag", 00:05:24.570 "log_set_flag", 00:05:24.570 "log_get_level", 00:05:24.570 "log_set_level", 00:05:24.570 "log_get_print_level", 00:05:24.570 "log_set_print_level", 00:05:24.570 "framework_enable_cpumask_locks", 00:05:24.570 "framework_disable_cpumask_locks", 00:05:24.570 "framework_wait_init", 00:05:24.570 "framework_start_init", 00:05:24.570 "scsi_get_devices", 00:05:24.570 "bdev_get_histogram", 00:05:24.570 "bdev_enable_histogram", 00:05:24.570 "bdev_set_qos_limit", 00:05:24.570 "bdev_set_qd_sampling_period", 00:05:24.570 "bdev_get_bdevs", 00:05:24.570 "bdev_reset_iostat", 00:05:24.570 "bdev_get_iostat", 00:05:24.570 "bdev_examine", 00:05:24.570 "bdev_wait_for_examine", 00:05:24.570 "bdev_set_options", 00:05:24.570 "accel_get_stats", 00:05:24.570 "accel_set_options", 
00:05:24.570 "accel_set_driver", 00:05:24.570 "accel_crypto_key_destroy", 00:05:24.570 "accel_crypto_keys_get", 00:05:24.570 "accel_crypto_key_create", 00:05:24.570 "accel_assign_opc", 00:05:24.570 "accel_get_module_info", 00:05:24.570 "accel_get_opc_assignments", 00:05:24.570 "vmd_rescan", 00:05:24.570 "vmd_remove_device", 00:05:24.570 "vmd_enable", 00:05:24.570 "sock_get_default_impl", 00:05:24.570 "sock_set_default_impl", 00:05:24.570 "sock_impl_set_options", 00:05:24.570 "sock_impl_get_options", 00:05:24.570 "iobuf_get_stats", 00:05:24.570 "iobuf_set_options", 00:05:24.570 "keyring_get_keys", 00:05:24.570 "framework_get_pci_devices", 00:05:24.570 "framework_get_config", 00:05:24.570 "framework_get_subsystems", 00:05:24.570 "fsdev_set_opts", 00:05:24.570 "fsdev_get_opts", 00:05:24.570 "trace_get_info", 00:05:24.570 "trace_get_tpoint_group_mask", 00:05:24.570 "trace_disable_tpoint_group", 00:05:24.570 "trace_enable_tpoint_group", 00:05:24.570 "trace_clear_tpoint_mask", 00:05:24.570 "trace_set_tpoint_mask", 00:05:24.570 "notify_get_notifications", 00:05:24.570 "notify_get_types", 00:05:24.570 "spdk_get_version", 00:05:24.570 "rpc_get_methods" 00:05:24.570 ] 00:05:24.570 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:24.570 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:24.570 21:37:47 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 69672 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 69672 ']' 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 69672 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.570 21:37:47 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69672 00:05:24.570 killing process with pid 69672 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69672' 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 69672 00:05:24.570 21:37:47 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 69672 00:05:24.831 ************************************ 00:05:24.831 END TEST spdkcli_tcp 00:05:24.831 ************************************ 00:05:24.831 00:05:24.831 real 0m1.782s 00:05:24.831 user 0m2.969s 00:05:24.831 sys 0m0.554s 00:05:24.831 21:37:47 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.832 21:37:47 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.092 21:37:47 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.092 21:37:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.092 21:37:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.092 21:37:47 -- common/autotest_common.sh@10 -- # set +x 00:05:25.092 ************************************ 00:05:25.093 START TEST dpdk_mem_utility 00:05:25.093 ************************************ 00:05:25.093 21:37:47 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:25.093 * Looking for test storage... 
00:05:25.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.093 21:37:48 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.093 --rc genhtml_branch_coverage=1 00:05:25.093 --rc genhtml_function_coverage=1 00:05:25.093 --rc genhtml_legend=1 00:05:25.093 --rc geninfo_all_blocks=1 00:05:25.093 --rc geninfo_unexecuted_blocks=1 00:05:25.093 00:05:25.093 ' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.093 --rc genhtml_branch_coverage=1 00:05:25.093 --rc genhtml_function_coverage=1 00:05:25.093 --rc genhtml_legend=1 00:05:25.093 --rc geninfo_all_blocks=1 00:05:25.093 --rc 
geninfo_unexecuted_blocks=1 00:05:25.093 00:05:25.093 ' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.093 --rc genhtml_branch_coverage=1 00:05:25.093 --rc genhtml_function_coverage=1 00:05:25.093 --rc genhtml_legend=1 00:05:25.093 --rc geninfo_all_blocks=1 00:05:25.093 --rc geninfo_unexecuted_blocks=1 00:05:25.093 00:05:25.093 ' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.093 --rc genhtml_branch_coverage=1 00:05:25.093 --rc genhtml_function_coverage=1 00:05:25.093 --rc genhtml_legend=1 00:05:25.093 --rc geninfo_all_blocks=1 00:05:25.093 --rc geninfo_unexecuted_blocks=1 00:05:25.093 00:05:25.093 ' 00:05:25.093 21:37:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:25.093 21:37:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=69772 00:05:25.093 21:37:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:25.093 21:37:48 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 69772 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 69772 ']' 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.093 21:37:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.353 [2024-11-27 21:37:48.271982] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:25.353 [2024-11-27 21:37:48.272108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69772 ] 00:05:25.353 [2024-11-27 21:37:48.427989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.353 [2024-11-27 21:37:48.452928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.293 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.293 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:26.294 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:26.294 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:26.294 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.294 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.294 { 00:05:26.294 "filename": "/tmp/spdk_mem_dump.txt" 00:05:26.294 } 00:05:26.294 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.294 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:26.294 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:26.294 1 heaps totaling size 818.000000 MiB 00:05:26.294 size: 818.000000 MiB heap id: 0 00:05:26.294 end heaps---------- 00:05:26.294 9 mempools totaling size 603.782043 MiB 00:05:26.294 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:26.294 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:26.294 size: 100.555481 MiB name: bdev_io_69772 00:05:26.294 size: 50.003479 MiB name: msgpool_69772 00:05:26.294 size: 36.509338 MiB name: fsdev_io_69772 00:05:26.294 size: 21.763794 MiB name: PDU_Pool 00:05:26.294 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:26.294 size: 4.133484 MiB name: evtpool_69772 00:05:26.294 size: 0.026123 MiB name: Session_Pool 00:05:26.294 end mempools------- 00:05:26.294 6 memzones totaling size 4.142822 MiB 00:05:26.294 size: 1.000366 MiB name: RG_ring_0_69772 00:05:26.294 size: 1.000366 MiB name: RG_ring_1_69772 00:05:26.294 size: 1.000366 MiB name: RG_ring_4_69772 00:05:26.294 size: 1.000366 MiB name: RG_ring_5_69772 00:05:26.294 size: 0.125366 MiB name: RG_ring_2_69772 00:05:26.294 size: 0.015991 MiB name: RG_ring_3_69772 00:05:26.294 end memzones------- 00:05:26.294 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:26.294 heap id: 0 total size: 818.000000 MiB number of busy elements: 318 number of free elements: 15 00:05:26.294 list of free elements. 
size: 10.802307 MiB 00:05:26.294 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:26.294 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:26.294 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:26.294 element at address: 0x200000400000 with size: 0.993958 MiB 00:05:26.294 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:26.294 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:26.294 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:26.294 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:26.294 element at address: 0x20001ae00000 with size: 0.566956 MiB 00:05:26.294 element at address: 0x20000a600000 with size: 0.488892 MiB 00:05:26.294 element at address: 0x200000c00000 with size: 0.486267 MiB 00:05:26.294 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:26.294 element at address: 0x200003e00000 with size: 0.480286 MiB 00:05:26.294 element at address: 0x200028200000 with size: 0.396301 MiB 00:05:26.294 element at address: 0x200000800000 with size: 0.351746 MiB 00:05:26.294 list of standard malloc elements. 
size: 199.268799 MiB 00:05:26.294 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:26.294 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:26.294 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:26.294 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:26.294 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:26.294 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:26.294 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:26.294 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:26.294 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:26.294 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:05:26.294 element at 
address: 0x2000004ff340 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000085e580 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087e840 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087e900 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:05:26.294 element at address: 0x20000087f080 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f140 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f200 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f380 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f440 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f500 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:26.294 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7c7c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7c880 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7c940 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7ca00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d3c0 with 
size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d6c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:05:26.294 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:05:26.295 element at address: 
0x200000c7e8c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:26.295 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:05:26.295 
element at address: 0x20000a67d700 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91240 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91300 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae913c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91480 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91540 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91600 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae916c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91780 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91840 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91900 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae919c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91a80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91b40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91c00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91cc0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91d80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91e40 with size: 0.000183 
MiB 00:05:26.295 element at address: 0x20001ae91f00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae91fc0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92080 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92140 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92200 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae922c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92380 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92440 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92500 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae925c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92680 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92740 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92800 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae928c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92980 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92a40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92b00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92bc0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92c80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92d40 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92e00 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92ec0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae92f80 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae93040 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae93100 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae931c0 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae93280 with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae93340 
with size: 0.000183 MiB 00:05:26.295 element at address: 0x20001ae93400 with size: 0.000183 MiB [... repeated "element at address: <addr> with size: 0.000183 MiB" entries elided: addresses spanning 0x20001ae934c0 through 0x20001ae95440 and 0x200028265740 through 0x20002826fe40, every entry 0.000183 MiB ...] 00:05:26.296 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:26.296 list of memzone associated elements. 
size: 607.928894 MiB 00:05:26.296 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:26.296 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:26.296 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:26.296 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:26.296 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:26.296 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_69772_0 00:05:26.296 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:26.296 associated memzone info: size: 48.002930 MiB name: MP_msgpool_69772_0 00:05:26.296 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:26.296 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_69772_0 00:05:26.296 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:26.296 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:26.296 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:26.296 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:26.296 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:26.296 associated memzone info: size: 3.000122 MiB name: MP_evtpool_69772_0 00:05:26.296 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:26.296 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_69772 00:05:26.296 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:26.296 associated memzone info: size: 1.007996 MiB name: MP_evtpool_69772 00:05:26.296 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:26.296 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:26.296 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:26.296 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:26.296 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:26.296 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:26.296 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:26.296 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:26.296 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:26.296 associated memzone info: size: 1.000366 MiB name: RG_ring_0_69772 00:05:26.296 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:26.296 associated memzone info: size: 1.000366 MiB name: RG_ring_1_69772 00:05:26.296 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:26.296 associated memzone info: size: 1.000366 MiB name: RG_ring_4_69772 00:05:26.296 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:26.296 associated memzone info: size: 1.000366 MiB name: RG_ring_5_69772 00:05:26.296 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:26.296 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_69772 00:05:26.296 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:26.296 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_69772 00:05:26.296 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:26.296 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:26.296 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:26.296 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:26.296 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:26.296 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:26.296 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:26.296 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_69772 00:05:26.296 element at address: 0x20000085e640 with size: 0.125488 MiB 00:05:26.296 associated memzone info: size: 0.125366 MiB name: RG_ring_2_69772 00:05:26.296 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:05:26.296 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:26.296 element at address: 0x2000282658c0 with size: 0.023743 MiB 00:05:26.296 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:26.296 element at address: 0x20000085a380 with size: 0.016113 MiB 00:05:26.296 associated memzone info: size: 0.015991 MiB name: RG_ring_3_69772 00:05:26.296 element at address: 0x20002826ba00 with size: 0.002441 MiB 00:05:26.296 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:26.296 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:05:26.296 associated memzone info: size: 0.000183 MiB name: MP_msgpool_69772 00:05:26.296 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:26.296 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_69772 00:05:26.296 element at address: 0x20000085a180 with size: 0.000305 MiB 00:05:26.296 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_69772 00:05:26.296 element at address: 0x20002826c4c0 with size: 0.000305 MiB 00:05:26.296 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:26.296 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:26.296 21:37:49 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 69772 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 69772 ']' 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 69772 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69772 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:26.296 21:37:49 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69772' 00:05:26.296 killing process with pid 69772 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 69772 00:05:26.296 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 69772 00:05:26.556 00:05:26.556 real 0m1.697s 00:05:26.556 user 0m1.685s 00:05:26.556 sys 0m0.489s 00:05:26.556 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.556 ************************************ 00:05:26.556 END TEST dpdk_mem_utility 00:05:26.556 ************************************ 00:05:26.556 21:37:49 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 21:37:49 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.816 21:37:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.816 21:37:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.816 21:37:49 -- common/autotest_common.sh@10 -- # set +x 00:05:26.816 ************************************ 00:05:26.816 START TEST event 00:05:26.816 ************************************ 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:26.816 * Looking for test storage... 
00:05:26.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:26.816 21:37:49 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.816 21:37:49 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.816 21:37:49 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.816 21:37:49 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.816 21:37:49 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.816 21:37:49 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.816 21:37:49 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.816 21:37:49 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.816 21:37:49 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.816 21:37:49 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.816 21:37:49 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:26.816 21:37:49 event -- scripts/common.sh@344 -- # case "$op" in 00:05:26.816 21:37:49 event -- scripts/common.sh@345 -- # : 1 00:05:26.816 21:37:49 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.816 21:37:49 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:26.816 21:37:49 event -- scripts/common.sh@365 -- # decimal 1 00:05:26.816 21:37:49 event -- scripts/common.sh@353 -- # local d=1 00:05:26.816 21:37:49 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.816 21:37:49 event -- scripts/common.sh@355 -- # echo 1 00:05:26.816 21:37:49 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.816 21:37:49 event -- scripts/common.sh@366 -- # decimal 2 00:05:26.816 21:37:49 event -- scripts/common.sh@353 -- # local d=2 00:05:26.816 21:37:49 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.816 21:37:49 event -- scripts/common.sh@355 -- # echo 2 00:05:26.816 21:37:49 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.816 21:37:49 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.816 21:37:49 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.816 21:37:49 event -- scripts/common.sh@368 -- # return 0 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:26.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.816 --rc genhtml_branch_coverage=1 00:05:26.816 --rc genhtml_function_coverage=1 00:05:26.816 --rc genhtml_legend=1 00:05:26.816 --rc geninfo_all_blocks=1 00:05:26.816 --rc geninfo_unexecuted_blocks=1 00:05:26.816 00:05:26.816 ' 00:05:26.816 21:37:49 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:26.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.817 --rc genhtml_branch_coverage=1 00:05:26.817 --rc genhtml_function_coverage=1 00:05:26.817 --rc genhtml_legend=1 00:05:26.817 --rc geninfo_all_blocks=1 00:05:26.817 --rc geninfo_unexecuted_blocks=1 00:05:26.817 00:05:26.817 ' 00:05:26.817 21:37:49 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:26.817 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:26.817 --rc genhtml_branch_coverage=1 00:05:26.817 --rc genhtml_function_coverage=1 00:05:26.817 --rc genhtml_legend=1 00:05:26.817 --rc geninfo_all_blocks=1 00:05:26.817 --rc geninfo_unexecuted_blocks=1 00:05:26.817 00:05:26.817 ' 00:05:26.817 21:37:49 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:26.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.817 --rc genhtml_branch_coverage=1 00:05:26.817 --rc genhtml_function_coverage=1 00:05:26.817 --rc genhtml_legend=1 00:05:26.817 --rc geninfo_all_blocks=1 00:05:26.817 --rc geninfo_unexecuted_blocks=1 00:05:26.817 00:05:26.817 ' 00:05:26.817 21:37:49 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:26.817 21:37:49 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:26.817 21:37:49 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.817 21:37:49 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:26.817 21:37:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.817 21:37:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.079 ************************************ 00:05:27.079 START TEST event_perf 00:05:27.079 ************************************ 00:05:27.079 21:37:49 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:27.079 Running I/O for 1 seconds...[2024-11-27 21:37:49.983342] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:27.079 [2024-11-27 21:37:49.983523] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69847 ] 00:05:27.079 [2024-11-27 21:37:50.140701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.079 Running I/O for 1 seconds...[2024-11-27 21:37:50.169755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.079 [2024-11-27 21:37:50.170038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.079 [2024-11-27 21:37:50.169959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.079 [2024-11-27 21:37:50.170187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.488 00:05:28.488 lcore 0: 212343 00:05:28.488 lcore 1: 212341 00:05:28.488 lcore 2: 212341 00:05:28.488 lcore 3: 212342 00:05:28.488 done. 
00:05:28.488 ************************************ 00:05:28.488 END TEST event_perf 00:05:28.488 00:05:28.488 real 0m1.292s 00:05:28.488 user 0m4.072s 00:05:28.488 sys 0m0.102s 00:05:28.488 21:37:51 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.488 21:37:51 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.488 ************************************ 00:05:28.488 21:37:51 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.488 21:37:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:28.488 21:37:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.488 21:37:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.488 ************************************ 00:05:28.488 START TEST event_reactor 00:05:28.488 ************************************ 00:05:28.488 21:37:51 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:28.488 [2024-11-27 21:37:51.337050] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:28.488 [2024-11-27 21:37:51.337217] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69892 ] 00:05:28.488 [2024-11-27 21:37:51.490505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.488 [2024-11-27 21:37:51.515657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.870 test_start 00:05:29.870 oneshot 00:05:29.870 tick 100 00:05:29.870 tick 100 00:05:29.870 tick 250 00:05:29.870 tick 100 00:05:29.870 tick 100 00:05:29.870 tick 100 00:05:29.870 tick 500 00:05:29.870 tick 250 00:05:29.870 tick 100 00:05:29.870 tick 100 00:05:29.870 tick 250 00:05:29.870 tick 100 00:05:29.870 tick 100 00:05:29.870 test_end 00:05:29.870 00:05:29.870 real 0m1.272s 00:05:29.870 user 0m1.096s 00:05:29.870 sys 0m0.069s 00:05:29.870 21:37:52 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.870 21:37:52 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.870 ************************************ 00:05:29.871 END TEST event_reactor 00:05:29.871 ************************************ 00:05:29.871 21:37:52 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.871 21:37:52 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.871 21:37:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.871 21:37:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.871 ************************************ 00:05:29.871 START TEST event_reactor_perf 00:05:29.871 ************************************ 00:05:29.871 21:37:52 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.871 [2024-11-27 
21:37:52.668436] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:29.871 [2024-11-27 21:37:52.668587] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69923 ] 00:05:29.871 [2024-11-27 21:37:52.824870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.871 [2024-11-27 21:37:52.850019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.811 test_start 00:05:30.811 test_end 00:05:30.811 Performance: 393437 events per second 00:05:30.811 ************************************ 00:05:30.811 END TEST event_reactor_perf 00:05:30.811 ************************************ 00:05:30.811 00:05:30.811 real 0m1.280s 00:05:30.811 user 0m1.101s 00:05:30.811 sys 0m0.072s 00:05:30.811 21:37:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.811 21:37:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.072 21:37:53 event -- event/event.sh@49 -- # uname -s 00:05:31.072 21:37:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.072 21:37:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.072 21:37:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.072 21:37:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.072 21:37:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.072 ************************************ 00:05:31.072 START TEST event_scheduler 00:05:31.072 ************************************ 00:05:31.072 21:37:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.072 * Looking for test storage... 
00:05:31.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.072 21:37:54 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:31.072 21:37:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:31.072 21:37:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:31.072 21:37:54 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.072 21:37:54 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.333 21:37:54 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.333 21:37:54 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.333 21:37:54 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.333 21:37:54 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.333 --rc genhtml_branch_coverage=1 00:05:31.333 --rc genhtml_function_coverage=1 00:05:31.333 --rc genhtml_legend=1 00:05:31.333 --rc geninfo_all_blocks=1 00:05:31.333 --rc geninfo_unexecuted_blocks=1 00:05:31.333 00:05:31.333 ' 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.333 --rc genhtml_branch_coverage=1 00:05:31.333 --rc genhtml_function_coverage=1 00:05:31.333 --rc 
genhtml_legend=1 00:05:31.333 --rc geninfo_all_blocks=1 00:05:31.333 --rc geninfo_unexecuted_blocks=1 00:05:31.333 00:05:31.333 ' 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.333 --rc genhtml_branch_coverage=1 00:05:31.333 --rc genhtml_function_coverage=1 00:05:31.333 --rc genhtml_legend=1 00:05:31.333 --rc geninfo_all_blocks=1 00:05:31.333 --rc geninfo_unexecuted_blocks=1 00:05:31.333 00:05:31.333 ' 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:31.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.333 --rc genhtml_branch_coverage=1 00:05:31.333 --rc genhtml_function_coverage=1 00:05:31.333 --rc genhtml_legend=1 00:05:31.333 --rc geninfo_all_blocks=1 00:05:31.333 --rc geninfo_unexecuted_blocks=1 00:05:31.333 00:05:31.333 ' 00:05:31.333 21:37:54 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.333 21:37:54 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=69998 00:05:31.333 21:37:54 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.333 21:37:54 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.333 21:37:54 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 69998 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 69998 ']' 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:31.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.333 21:37:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.333 [2024-11-27 21:37:54.277908] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:31.333 [2024-11-27 21:37:54.278121] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69998 ] 00:05:31.333 [2024-11-27 21:37:54.435042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.594 [2024-11-27 21:37:54.464684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.594 [2024-11-27 21:37:54.464844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.594 [2024-11-27 21:37:54.464916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.594 [2024-11-27 21:37:54.465006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:32.163 21:37:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.163 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.163 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.163 POWER: Cannot set governor of lcore 0 to performance 00:05:32.163 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.163 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.163 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:32.163 POWER: Unable to set Power Management Environment for lcore 0 [2024-11-27 21:37:55.117702] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 [2024-11-27 21:37:55.117739] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 [2024-11-27 21:37:55.117783] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor [2024-11-27 21:37:55.117859] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 [2024-11-27 21:37:55.117892] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 [2024-11-27 21:37:55.117915] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 21:37:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.163 21:37:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.163 [2024-11-27 21:37:55.193894] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
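The POWER errors above come from the dynamic scheduler probing the per-CPU cpufreq sysfs files and falling back to defaults when they cannot be written (common inside a VM, as on this VM-host runner). A minimal sketch of that kind of probe — the function name and messages are illustrative assumptions, not DPDK's actual code:

```shell
# Hedged sketch (not DPDK's implementation): test whether the per-CPU
# scaling_governor file is writable before attempting to set a governor,
# mirroring the "Cannot set governor of lcore 0" fallback seen above.
governor_writable() {
    local cpu="${1:-0}"
    local f="/sys/devices/system/cpu/cpu${cpu}/cpufreq/scaling_governor"
    [ -w "$f" ]
}

if governor_writable 0; then
    echo "cpufreq governor available for lcore 0"
else
    echo "Unable to initialize dpdk governor; continuing without it"
fi
```

On hosts without cpufreq support (or without write permission), the check fails and the scheduler keeps running with its static defaults, exactly as the log's *NOTICE* lines show.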
00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.163 21:37:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.163 21:37:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.163 ************************************ 00:05:32.163 START TEST scheduler_create_thread 00:05:32.163 ************************************ 00:05:32.163 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:32.163 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.163 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.163 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.163 2 00:05:32.163 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 3 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 4 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 5 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.164 6 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.164 21:37:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.423 7 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.423 8 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.423 9 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.423 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.683 10 00:05:32.683 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.683 21:37:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:32.683 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.683 21:37:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.064 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.064 21:37:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:34.064 21:37:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:34.064 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.064 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.002 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.002 21:37:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:35.002 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.002 21:37:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.570 21:37:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.570 21:37:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:35.570 21:37:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:35.570 21:37:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.570 21:37:58 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.640 ************************************ 00:05:36.640 END TEST scheduler_create_thread 00:05:36.640 ************************************ 00:05:36.640 21:37:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.640 00:05:36.640 real 0m4.211s 00:05:36.640 user 0m0.029s 00:05:36.640 sys 0m0.004s 00:05:36.640 21:37:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.640 21:37:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:36.640 21:37:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:36.640 21:37:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 69998 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 69998 ']' 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 69998 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69998 00:05:36.640 killing process with pid 69998 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69998' 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 69998 00:05:36.640 21:37:59 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 69998 00:05:36.640 [2024-11-27 21:37:59.697847] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:36.924 00:05:36.924 real 0m5.985s 00:05:36.924 user 0m12.971s 00:05:36.924 sys 0m0.457s 00:05:36.924 21:37:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.924 ************************************ 00:05:36.924 END TEST event_scheduler 00:05:36.924 ************************************ 00:05:36.924 21:37:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.924 21:38:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:36.924 21:38:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:36.924 21:38:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.924 21:38:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.924 21:38:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.924 ************************************ 00:05:36.924 START TEST app_repeat 00:05:36.924 ************************************ 00:05:36.924 21:38:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70105 00:05:36.924 21:38:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.184 
21:38:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.184 21:38:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70105' 00:05:37.184 Process app_repeat pid: 70105 00:05:37.184 21:38:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.184 21:38:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.184 spdk_app_start Round 0 00:05:37.184 21:38:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70105 /var/tmp/spdk-nbd.sock 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70105 ']' 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.184 21:38:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.184 [2024-11-27 21:38:00.093404] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
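The "Waiting for process to start up and listen on UNIX domain socket" message with `max_retries=100` comes from the harness's waitforlisten helper in autotest_common.sh. A simplified, self-contained sketch of that retry loop (polling for a path instead of probing a live RPC socket, and with a hypothetical `wait_for_path` name):

```shell
#!/usr/bin/env bash
# Simplified waitforlisten pattern: poll until a path appears,
# giving up after max_retries attempts. The real helper additionally
# checks that the process is alive and that the socket accepts RPCs.
wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

tmp=$(mktemp -d)
( sleep 0.3; touch "$tmp/app.sock" ) &   # stand-in for the app starting up
wait_for_path "$tmp/app.sock" && echo "listening"
wait
rm -rf "$tmp"
```

With the defaults this bounds the wait at roughly ten seconds (100 polls of 0.1 s) before the test gives up on the app.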
00:05:37.184 [2024-11-27 21:38:00.093537] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70105 ] 00:05:37.184 [2024-11-27 21:38:00.247774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.184 [2024-11-27 21:38:00.275140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.184 [2024-11-27 21:38:00.275202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.124 21:38:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.124 21:38:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.124 21:38:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.124 Malloc0 00:05:38.124 21:38:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.382 Malloc1 00:05:38.382 21:38:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.382 21:38:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.382 21:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.640 /dev/nbd0 00:05:38.640 21:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.640 21:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.640 1+0 records in 00:05:38.640 1+0 
records out 00:05:38.640 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239769 s, 17.1 MB/s 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.640 21:38:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.640 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.640 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.640 21:38:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.900 /dev/nbd1 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.900 1+0 records in 00:05:38.900 1+0 records out 00:05:38.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364917 s, 11.2 MB/s 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.900 21:38:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.900 21:38:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.160 21:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.160 { 00:05:39.160 "nbd_device": "/dev/nbd0", 00:05:39.160 "bdev_name": "Malloc0" 00:05:39.160 }, 00:05:39.160 { 00:05:39.160 "nbd_device": "/dev/nbd1", 00:05:39.160 "bdev_name": "Malloc1" 00:05:39.161 } 00:05:39.161 ]' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.161 { 00:05:39.161 "nbd_device": "/dev/nbd0", 00:05:39.161 "bdev_name": "Malloc0" 00:05:39.161 }, 00:05:39.161 { 00:05:39.161 "nbd_device": "/dev/nbd1", 00:05:39.161 "bdev_name": "Malloc1" 00:05:39.161 } 00:05:39.161 ]' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
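The nbd_get_count step below captures the JSON that `nbd_get_disks` returns, extracts each `nbd_device` with `jq`, and counts matches with `grep -c /dev/nbd`, expecting 2. A jq-free sketch of the counting logic, using a captured stand-in for the live RPC response:

```shell
#!/usr/bin/env bash
# Mirror of the nbd_get_count logic: count the nbd devices reported
# by the nbd_get_disks RPC. The JSON below is a canned stand-in for
# a live response; the harness pipes the real one through jq first.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# one device entry per line, so counting matching lines gives the count
count=$(printf '%s\n' "$nbd_disks_json" | grep -c '/dev/nbd')
echo "$count"   # → 2
```

The same `grep -c` run against the empty `[]` list after teardown yields 0, which is the check the later `nbd_get_count` call performs before the test returns.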
00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.161 /dev/nbd1' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.161 /dev/nbd1' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.161 256+0 records in 00:05:39.161 256+0 records out 00:05:39.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141211 s, 74.3 MB/s 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.161 256+0 records in 00:05:39.161 256+0 records out 00:05:39.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0205212 s, 51.1 MB/s 00:05:39.161 21:38:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.161 256+0 records in 00:05:39.161 256+0 records out 00:05:39.161 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226256 s, 46.3 MB/s 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.161 21:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.420 21:38:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.680 21:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.940 21:38:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.940 21:38:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.199 21:38:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.199 [2024-11-27 21:38:03.290765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.199 [2024-11-27 21:38:03.314683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.199 [2024-11-27 21:38:03.314690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.458 
[2024-11-27 21:38:03.359027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:40.458 [2024-11-27 21:38:03.359090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.752 spdk_app_start Round 1 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70105 /var/tmp/spdk-nbd.sock 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70105 ']' 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
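The nbd_dd_data_verify steps above write 256 random 4 KiB blocks through each `/dev/nbdX` and then `cmp` the first 1 MiB back against the pattern file. A self-contained sketch of that write-then-verify round trip, with a plain file standing in for the nbd device since no NBD server is running here:

```shell
#!/usr/bin/env bash
# Sketch of the round trip nbd_dd_data_verify performs, with a plain
# file playing the role of /dev/nbd0 (no NBD server needed for this).
set -e
tmp=$(mktemp -d)
randtest="$tmp/nbdrandtest"   # random pattern file (nbdrandtest in the log)
device="$tmp/fake-nbd0"       # stand-in for the nbd block device

# write phase: generate the pattern, then copy it onto the "device"
dd if=/dev/urandom of="$randtest" bs=4096 count=256 status=none
dd if="$randtest" of="$device" bs=4096 count=256 status=none

# verify phase: byte-compare the first 1M, as nbd_common.sh does
cmp -b -n 1M "$randtest" "$device" && echo "verify ok"
rm -rf "$tmp"
```

The real helper adds `oflag=direct` on the device writes to bypass the page cache, so the comparison exercises the actual block path rather than cached data.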
00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.752 21:38:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.752 Malloc0 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:43.752 Malloc1 00:05:43.752 21:38:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:43.752 21:38:06 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:43.752 21:38:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.012 /dev/nbd0 00:05:44.012 21:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.012 21:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.012 1+0 records in 00:05:44.012 1+0 records out 00:05:44.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373592 s, 11.0 MB/s 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.012 
21:38:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.012 21:38:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.012 21:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.012 21:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.012 21:38:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.273 /dev/nbd1 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.273 1+0 records in 00:05:44.273 1+0 records out 00:05:44.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311326 s, 13.2 MB/s 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.273 21:38:07 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.273 21:38:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.273 21:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:44.532 { 00:05:44.532 "nbd_device": "/dev/nbd0", 00:05:44.532 "bdev_name": "Malloc0" 00:05:44.532 }, 00:05:44.532 { 00:05:44.532 "nbd_device": "/dev/nbd1", 00:05:44.532 "bdev_name": "Malloc1" 00:05:44.532 } 00:05:44.532 ]' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:44.532 { 00:05:44.532 "nbd_device": "/dev/nbd0", 00:05:44.532 "bdev_name": "Malloc0" 00:05:44.532 }, 00:05:44.532 { 00:05:44.532 "nbd_device": "/dev/nbd1", 00:05:44.532 "bdev_name": "Malloc1" 00:05:44.532 } 00:05:44.532 ]' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:44.532 /dev/nbd1' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:44.532 /dev/nbd1' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:44.532 
21:38:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:44.532 256+0 records in 00:05:44.532 256+0 records out 00:05:44.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490324 s, 214 MB/s 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:44.532 256+0 records in 00:05:44.532 256+0 records out 00:05:44.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166094 s, 63.1 MB/s 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:44.532 256+0 records in 00:05:44.532 256+0 records out 00:05:44.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223797 s, 46.9 MB/s 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:44.532 21:38:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.533 21:38:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:44.792 21:38:07 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:44.792 21:38:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.052 21:38:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.312 21:38:08 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:45.312 21:38:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:45.312 21:38:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:45.571 21:38:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:45.571 [2024-11-27 21:38:08.647326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:45.571 [2024-11-27 21:38:08.671740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.571 [2024-11-27 21:38:08.671753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.831 [2024-11-27 21:38:08.714480] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:45.831 [2024-11-27 21:38:08.714541] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.120 spdk_app_start Round 2 00:05:49.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
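Editorially, the `app_repeat` rounds above all exercise one repeatable pattern: create two malloc bdevs over the RPC socket, export them as NBD block devices, write random data through each, and compare it back before tearing down. The sketch below reconstructs that loop from the commands visible in the log; the RPC socket, `rpc.py` path, and bdev arguments are copied from the log, and an SPDK application already listening on `/var/tmp/spdk-nbd.sock` is assumed (this is a sketch of the test flow, not the test script itself).

```shell
# Sketch of the data-verify loop seen in the log above.
# Assumes an SPDK app is already listening on /var/tmp/spdk-nbd.sock
# and that rpc.py lives at the path the log uses.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/tmp/nbdrandtest   # stand-in for the log's nbdrandtest temp file

# Create two malloc bdevs (arguments copied from the log: 64 MiB, 4096-byte blocks)
$RPC bdev_malloc_create 64 4096   # reports Malloc0
$RPC bdev_malloc_create 64 4096   # reports Malloc1

# Export them as NBD devices
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each device, then read back and compare
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$dev"   # log shows this exact verify step
done

# Tear down
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1
rm -f "$TMP"
```

Note the `oflag=direct` on the writes, matching the log: it bypasses the page cache so the `cmp` actually reads what landed on the NBD-backed bdev rather than cached pages.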
00:05:49.120 21:38:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.120 21:38:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.120 21:38:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70105 /var/tmp/spdk-nbd.sock 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70105 ']' 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.120 21:38:11 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.120 21:38:11 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.120 Malloc0 00:05:49.120 21:38:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.120 Malloc1 00:05:49.120 21:38:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.120 21:38:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:49.380 /dev/nbd0 00:05:49.380 21:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:49.380 21:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.380 1+0 records in 00:05:49.380 1+0 records out 00:05:49.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038941 s, 10.5 MB/s 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.380 21:38:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.380 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.380 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.380 21:38:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:49.639 /dev/nbd1 00:05:49.639 21:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:49.639 21:38:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:49.639 21:38:12 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:49.639 1+0 records in 00:05:49.639 1+0 records out 00:05:49.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000352825 s, 11.6 MB/s 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:49.639 21:38:12 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:49.639 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:49.639 21:38:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:49.639 21:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:49.640 21:38:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.640 21:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:49.899 { 00:05:49.899 "nbd_device": "/dev/nbd0", 00:05:49.899 "bdev_name": "Malloc0" 00:05:49.899 }, 00:05:49.899 { 00:05:49.899 "nbd_device": "/dev/nbd1", 00:05:49.899 "bdev_name": "Malloc1" 00:05:49.899 } 00:05:49.899 ]' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:49.899 { 
00:05:49.899 "nbd_device": "/dev/nbd0", 00:05:49.899 "bdev_name": "Malloc0" 00:05:49.899 }, 00:05:49.899 { 00:05:49.899 "nbd_device": "/dev/nbd1", 00:05:49.899 "bdev_name": "Malloc1" 00:05:49.899 } 00:05:49.899 ]' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:49.899 /dev/nbd1' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:49.899 /dev/nbd1' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:49.899 256+0 records in 00:05:49.899 256+0 records out 00:05:49.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137905 s, 76.0 MB/s 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.899 21:38:12 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:49.899 256+0 records in 00:05:49.899 256+0 records out 00:05:49.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193357 s, 54.2 MB/s 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:49.899 256+0 records in 00:05:49.899 256+0 records out 00:05:49.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196169 s, 53.5 MB/s 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:49.899 21:38:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:50.160 21:38:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:50.420 21:38:13 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.420 21:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:50.680 21:38:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:50.680 21:38:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:50.939 21:38:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:50.939 
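Each round ends with the `nbd_get_count` check traced above: `nbd_get_disks` returns a JSON array, `jq` extracts the device paths, and `grep -c` counts them, expecting 0 after teardown (and 2 while the disks are up). A minimal standalone reconstruction, with the JSON inlined in place of the live RPC call (variable names mirror `nbd_common.sh`; `jq` is assumed available, as in the log):

```shell
# Reconstruction of the log's nbd_get_count logic, using a canned JSON
# payload instead of calling nbd_get_disks over the RPC socket.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# Extract one device path per line, exactly as nbd_common.sh@64 does.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c exits nonzero when there are zero matches, so the script chases
# it with `true` (visible as nbd_common.sh@65 -- # true in the log) to
# survive the empty-list case under `set -e`.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"   # -> 2
```

With the empty array `[]` the same pipeline yields `count=0`, which is the post-teardown state the log's `'[' 0 -ne 0 ']'` check passes through before `return 0`.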
[2024-11-27 21:38:13.978522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:50.939 [2024-11-27 21:38:14.003168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.939 [2024-11-27 21:38:14.003169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.939 [2024-11-27 21:38:14.045735] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:50.939 [2024-11-27 21:38:14.045818] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:54.230 21:38:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70105 /var/tmp/spdk-nbd.sock 00:05:54.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 70105 ']' 00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.230 21:38:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:54.230 21:38:17 event.app_repeat -- event/event.sh@39 -- # killprocess 70105 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 70105 ']' 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 70105 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70105 00:05:54.230 killing process with pid 70105 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70105' 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@973 -- # kill 70105 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@978 -- # wait 70105 00:05:54.230 spdk_app_start is called in Round 0. 00:05:54.230 Shutdown signal received, stop current app iteration 00:05:54.230 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:54.230 spdk_app_start is called in Round 1. 00:05:54.230 Shutdown signal received, stop current app iteration 00:05:54.230 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:54.230 spdk_app_start is called in Round 2. 
00:05:54.230 Shutdown signal received, stop current app iteration 00:05:54.230 Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 reinitialization... 00:05:54.230 spdk_app_start is called in Round 3. 00:05:54.230 Shutdown signal received, stop current app iteration 00:05:54.230 21:38:17 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:54.230 21:38:17 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:54.230 00:05:54.230 real 0m17.230s 00:05:54.230 user 0m38.062s 00:05:54.230 sys 0m2.609s 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.230 21:38:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:54.230 ************************************ 00:05:54.230 END TEST app_repeat 00:05:54.230 ************************************ 00:05:54.230 21:38:17 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:54.230 21:38:17 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.230 21:38:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.230 21:38:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.230 21:38:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.230 ************************************ 00:05:54.230 START TEST cpu_locks 00:05:54.230 ************************************ 00:05:54.230 21:38:17 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:54.491 * Looking for test storage... 
00:05:54.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.491 21:38:17 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.491 --rc genhtml_branch_coverage=1 00:05:54.491 --rc genhtml_function_coverage=1 00:05:54.491 --rc genhtml_legend=1 00:05:54.491 --rc geninfo_all_blocks=1 00:05:54.491 --rc geninfo_unexecuted_blocks=1 00:05:54.491 00:05:54.491 ' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.491 --rc genhtml_branch_coverage=1 00:05:54.491 --rc genhtml_function_coverage=1 00:05:54.491 --rc genhtml_legend=1 00:05:54.491 --rc geninfo_all_blocks=1 00:05:54.491 --rc geninfo_unexecuted_blocks=1 
00:05:54.491 00:05:54.491 ' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.491 --rc genhtml_branch_coverage=1 00:05:54.491 --rc genhtml_function_coverage=1 00:05:54.491 --rc genhtml_legend=1 00:05:54.491 --rc geninfo_all_blocks=1 00:05:54.491 --rc geninfo_unexecuted_blocks=1 00:05:54.491 00:05:54.491 ' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:54.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.491 --rc genhtml_branch_coverage=1 00:05:54.491 --rc genhtml_function_coverage=1 00:05:54.491 --rc genhtml_legend=1 00:05:54.491 --rc geninfo_all_blocks=1 00:05:54.491 --rc geninfo_unexecuted_blocks=1 00:05:54.491 00:05:54.491 ' 00:05:54.491 21:38:17 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:54.491 21:38:17 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:54.491 21:38:17 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:54.491 21:38:17 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.491 21:38:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.491 ************************************ 00:05:54.491 START TEST default_locks 00:05:54.491 ************************************ 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70531 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.491 
21:38:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70531 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70531 ']' 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.491 21:38:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.750 [2024-11-27 21:38:17.662207] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:54.750 [2024-11-27 21:38:17.662357] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70531 ] 00:05:54.750 [2024-11-27 21:38:17.816497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.750 [2024-11-27 21:38:17.841309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.689 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.689 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:55.689 21:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70531 00:05:55.689 21:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70531 00:05:55.689 21:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70531 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 70531 ']' 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 70531 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70531 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.949 killing process with pid 70531 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70531' 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 70531 00:05:55.949 21:38:18 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 70531 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70531 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70531 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 70531 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 70531 ']' 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.518 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.519 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70531) - No such process 00:05:56.519 ERROR: process (pid: 70531) is no longer running 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:56.519 00:05:56.519 real 0m1.775s 00:05:56.519 user 0m1.745s 00:05:56.519 sys 0m0.611s 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.519 21:38:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.519 ************************************ 00:05:56.519 END TEST default_locks 00:05:56.519 ************************************ 00:05:56.519 21:38:19 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:56.519 21:38:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:56.519 21:38:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.519 21:38:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.519 ************************************ 00:05:56.519 START TEST default_locks_via_rpc 00:05:56.519 ************************************ 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70579 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70579 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70579 ']' 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.519 21:38:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.519 [2024-11-27 21:38:19.514748] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:05:56.519 [2024-11-27 21:38:19.514887] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70579 ] 00:05:56.778 [2024-11-27 21:38:19.656374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.778 [2024-11-27 21:38:19.680981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.346 21:38:20 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70579 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.346 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 70579 ']' 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.606 killing process with pid 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70579' 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 70579 00:05:57.606 21:38:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 70579 00:05:58.177 00:05:58.177 real 0m1.615s 00:05:58.177 user 0m1.594s 00:05:58.177 sys 0m0.539s 00:05:58.177 21:38:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.177 21:38:21 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.177 ************************************ 00:05:58.177 END TEST default_locks_via_rpc 00:05:58.177 ************************************ 00:05:58.177 21:38:21 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.177 21:38:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.177 21:38:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.177 21:38:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.177 ************************************ 00:05:58.177 START TEST non_locking_app_on_locked_coremask 00:05:58.177 ************************************ 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70631 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70631 /var/tmp/spdk.sock 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70631 ']' 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.177 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.177 [2024-11-27 21:38:21.193552] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:58.177 [2024-11-27 21:38:21.193693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70631 ] 00:05:58.436 [2024-11-27 21:38:21.346411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.436 [2024-11-27 21:38:21.371089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70647 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70647 /var/tmp/spdk2.sock 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70647 ']' 00:05:59.004 21:38:21 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.004 21:38:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.004 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.004 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.004 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.004 [2024-11-27 21:38:22.087436] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:05:59.004 [2024-11-27 21:38:22.087931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70647 ] 00:05:59.264 [2024-11-27 21:38:22.237760] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.264 [2024-11-27 21:38:22.237815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.264 [2024-11-27 21:38:22.283611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.835 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.835 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.835 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70631 00:05:59.835 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70631 00:05:59.835 21:38:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70631 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70631 ']' 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70631 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.101 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70631 00:06:00.370 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.370 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.370 killing process with pid 70631 00:06:00.370 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 70631' 00:06:00.370 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70631 00:06:00.370 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70631 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70647 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70647 ']' 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70647 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70647 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70647' 00:06:00.938 killing process with pid 70647 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70647 00:06:00.938 21:38:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70647 00:06:01.558 00:06:01.558 real 0m3.217s 00:06:01.558 user 0m3.370s 00:06:01.558 sys 0m0.955s 00:06:01.558 21:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:01.558 21:38:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 ************************************ 00:06:01.558 END TEST non_locking_app_on_locked_coremask 00:06:01.558 ************************************ 00:06:01.558 21:38:24 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.558 21:38:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.558 21:38:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.558 21:38:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 ************************************ 00:06:01.558 START TEST locking_app_on_unlocked_coremask 00:06:01.558 ************************************ 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70707 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70707 /var/tmp/spdk.sock 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70707 ']' 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.558 21:38:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.558 [2024-11-27 21:38:24.480334] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:01.558 [2024-11-27 21:38:24.480480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70707 ] 00:06:01.558 [2024-11-27 21:38:24.631911] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.558 [2024-11-27 21:38:24.631958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.558 [2024-11-27 21:38:24.657152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70723 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70723 /var/tmp/spdk2.sock 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70723 ']' 
00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.495 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.496 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.496 21:38:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.496 [2024-11-27 21:38:25.378704] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:02.496 [2024-11-27 21:38:25.379199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70723 ] 00:06:02.496 [2024-11-27 21:38:25.529721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.496 [2024-11-27 21:38:25.579291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.432 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.432 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:03.432 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70723 00:06:03.432 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70723 00:06:03.432 21:38:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70707 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70707 ']' 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70707 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70707 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.692 killing process with pid 70707 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70707' 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70707 00:06:03.692 21:38:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70707 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70723 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70723 ']' 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 70723 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@959 -- # uname 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70723 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.633 killing process with pid 70723 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70723' 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 70723 00:06:04.633 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 70723 00:06:04.894 00:06:04.894 real 0m3.411s 00:06:04.894 user 0m3.595s 00:06:04.894 sys 0m1.009s 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 ************************************ 00:06:04.894 END TEST locking_app_on_unlocked_coremask 00:06:04.894 ************************************ 00:06:04.894 21:38:27 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.894 21:38:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.894 21:38:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.894 21:38:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 ************************************ 00:06:04.894 START TEST 
locking_app_on_locked_coremask 00:06:04.894 ************************************ 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=70781 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 70781 /var/tmp/spdk.sock 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70781 ']' 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.894 21:38:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.894 [2024-11-27 21:38:27.965993] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:04.894 [2024-11-27 21:38:27.966137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70781 ] 00:06:05.155 [2024-11-27 21:38:28.119319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.155 [2024-11-27 21:38:28.143580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=70797 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 70797 /var/tmp/spdk2.sock 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70797 /var/tmp/spdk2.sock 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70797 /var/tmp/spdk2.sock 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 70797 ']' 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.725 21:38:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.985 [2024-11-27 21:38:28.859604] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:05.985 [2024-11-27 21:38:28.859740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70797 ] 00:06:05.985 [2024-11-27 21:38:29.010433] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 70781 has claimed it. 00:06:05.985 [2024-11-27 21:38:29.010505] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:06.554 ERROR: process (pid: 70797) is no longer running 00:06:06.554 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70797) - No such process 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 70781 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70781 00:06:06.554 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 70781 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 70781 ']' 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 70781 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.813 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70781 00:06:07.073 
21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.073 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.073 killing process with pid 70781 00:06:07.073 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70781' 00:06:07.073 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 70781 00:06:07.073 21:38:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 70781 00:06:07.334 00:06:07.334 real 0m2.436s 00:06:07.334 user 0m2.625s 00:06:07.334 sys 0m0.704s 00:06:07.334 21:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.334 21:38:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.334 ************************************ 00:06:07.334 END TEST locking_app_on_locked_coremask 00:06:07.334 ************************************ 00:06:07.335 21:38:30 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.335 21:38:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.335 21:38:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.335 21:38:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.335 ************************************ 00:06:07.335 START TEST locking_overlapped_coremask 00:06:07.335 ************************************ 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=70850 00:06:07.335 21:38:30 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 70850 /var/tmp/spdk.sock 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70850 ']' 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.335 21:38:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.594 [2024-11-27 21:38:30.466693] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:07.594 [2024-11-27 21:38:30.467299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70850 ] 00:06:07.594 [2024-11-27 21:38:30.622098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.594 [2024-11-27 21:38:30.649314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.594 [2024-11-27 21:38:30.649407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.594 [2024-11-27 21:38:30.649514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=70868 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 70868 /var/tmp/spdk2.sock 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 70868 /var/tmp/spdk2.sock 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:08.165 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 70868 /var/tmp/spdk2.sock 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 70868 ']' 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.166 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.425 [2024-11-27 21:38:31.368097] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:08.425 [2024-11-27 21:38:31.368283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70868 ] 00:06:08.425 [2024-11-27 21:38:31.517038] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70850 has claimed it. 00:06:08.425 [2024-11-27 21:38:31.517108] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:08.995 ERROR: process (pid: 70868) is no longer running 00:06:08.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (70868) - No such process 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 70850 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 70850 ']' 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 70850 00:06:08.995 21:38:31 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.995 21:38:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70850 00:06:08.995 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.995 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.995 killing process with pid 70850 00:06:08.995 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70850' 00:06:08.995 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 70850 00:06:08.995 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 70850 00:06:09.565 00:06:09.565 real 0m2.025s 00:06:09.565 user 0m5.500s 00:06:09.565 sys 0m0.474s 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 ************************************ 00:06:09.565 END TEST locking_overlapped_coremask 00:06:09.565 ************************************ 00:06:09.565 21:38:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:09.565 21:38:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.565 21:38:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.565 21:38:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 ************************************ 00:06:09.565 START TEST 
locking_overlapped_coremask_via_rpc 00:06:09.565 ************************************ 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=70910 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 70910 /var/tmp/spdk.sock 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70910 ']' 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.565 21:38:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.565 [2024-11-27 21:38:32.561332] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:09.565 [2024-11-27 21:38:32.561504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70910 ] 00:06:09.826 [2024-11-27 21:38:32.715961] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:09.826 [2024-11-27 21:38:32.716022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.826 [2024-11-27 21:38:32.742223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.826 [2024-11-27 21:38:32.742314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.826 [2024-11-27 21:38:32.742442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=70928 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 70928 /var/tmp/spdk2.sock 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.395 21:38:33 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.395 21:38:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.395 [2024-11-27 21:38:33.488377] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:10.395 [2024-11-27 21:38:33.488515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70928 ] 00:06:10.652 [2024-11-27 21:38:33.636967] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:10.652 [2024-11-27 21:38:33.637014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.652 [2024-11-27 21:38:33.697118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.652 [2024-11-27 21:38:33.697122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.652 [2024-11-27 21:38:33.697231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.217 21:38:34 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.217 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.217 [2024-11-27 21:38:34.335003] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 70910 has claimed it. 00:06:11.476 request: 00:06:11.476 { 00:06:11.476 "method": "framework_enable_cpumask_locks", 00:06:11.476 "req_id": 1 00:06:11.476 } 00:06:11.476 Got JSON-RPC error response 00:06:11.476 response: 00:06:11.476 { 00:06:11.476 "code": -32603, 00:06:11.476 "message": "Failed to claim CPU core: 2" 00:06:11.476 } 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 70910 /var/tmp/spdk.sock 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 70910 ']' 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 70928 /var/tmp/spdk2.sock 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 70928 ']' 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.476 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.735 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.735 00:06:11.735 real 0m2.304s 00:06:11.735 user 0m1.066s 00:06:11.735 sys 0m0.154s 00:06:11.736 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.736 21:38:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.736 ************************************ 00:06:11.736 END TEST locking_overlapped_coremask_via_rpc 00:06:11.736 ************************************ 00:06:11.736 21:38:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.736 21:38:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70910 ]] 00:06:11.736 21:38:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 70910 00:06:11.736 21:38:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70910 ']' 00:06:11.736 21:38:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70910 00:06:11.736 21:38:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.736 21:38:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.736 21:38:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70910 00:06:11.994 killing process with pid 70910 00:06:11.994 21:38:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.994 21:38:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.994 21:38:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70910' 00:06:11.994 21:38:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70910 00:06:11.994 21:38:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70910 00:06:12.253 21:38:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70928 ]] 00:06:12.253 21:38:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70928 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70928 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70928 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:12.253 killing process with pid 70928 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 70928' 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 70928 00:06:12.253 21:38:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 70928 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 70910 ]] 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 70910 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70910 ']' 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70910 00:06:12.822 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70910) - No such process 00:06:12.822 Process with pid 70910 is not found 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70910 is not found' 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 70928 ]] 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 70928 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 70928 ']' 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 70928 00:06:12.822 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (70928) - No such process 00:06:12.822 Process with pid 70928 is not found 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 70928 is not found' 00:06:12.822 21:38:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:12.822 ************************************ 00:06:12.822 END TEST cpu_locks 00:06:12.822 ************************************ 00:06:12.822 00:06:12.822 real 0m18.323s 00:06:12.822 user 0m30.916s 00:06:12.822 sys 0m5.527s 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:12.822 21:38:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.822 00:06:12.822 real 0m45.983s 00:06:12.822 user 1m28.456s 00:06:12.822 sys 0m9.206s 00:06:12.822 21:38:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.822 21:38:35 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.822 ************************************ 00:06:12.822 END TEST event 00:06:12.822 ************************************ 00:06:12.822 21:38:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.822 21:38:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.822 21:38:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.822 21:38:35 -- common/autotest_common.sh@10 -- # set +x 00:06:12.822 ************************************ 00:06:12.822 START TEST thread 00:06:12.822 ************************************ 00:06:12.822 21:38:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:12.822 * Looking for test storage... 
00:06:12.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:12.822 21:38:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:12.822 21:38:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:12.822 21:38:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:13.082 21:38:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:13.082 21:38:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.082 21:38:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.082 21:38:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.082 21:38:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.082 21:38:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.082 21:38:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.082 21:38:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.082 21:38:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.082 21:38:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.082 21:38:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.082 21:38:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.082 21:38:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:13.082 21:38:35 thread -- scripts/common.sh@345 -- # : 1 00:06:13.082 21:38:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.082 21:38:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:13.082 21:38:35 thread -- scripts/common.sh@365 -- # decimal 1 00:06:13.082 21:38:35 thread -- scripts/common.sh@353 -- # local d=1 00:06:13.082 21:38:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.082 21:38:35 thread -- scripts/common.sh@355 -- # echo 1 00:06:13.082 21:38:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.082 21:38:35 thread -- scripts/common.sh@366 -- # decimal 2 00:06:13.082 21:38:35 thread -- scripts/common.sh@353 -- # local d=2 00:06:13.082 21:38:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.082 21:38:35 thread -- scripts/common.sh@355 -- # echo 2 00:06:13.082 21:38:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.082 21:38:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.082 21:38:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.082 21:38:35 thread -- scripts/common.sh@368 -- # return 0 00:06:13.083 21:38:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.083 21:38:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:13.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.083 --rc genhtml_branch_coverage=1 00:06:13.083 --rc genhtml_function_coverage=1 00:06:13.083 --rc genhtml_legend=1 00:06:13.083 --rc geninfo_all_blocks=1 00:06:13.083 --rc geninfo_unexecuted_blocks=1 00:06:13.083 00:06:13.083 ' 00:06:13.083 21:38:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:13.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.083 --rc genhtml_branch_coverage=1 00:06:13.083 --rc genhtml_function_coverage=1 00:06:13.083 --rc genhtml_legend=1 00:06:13.083 --rc geninfo_all_blocks=1 00:06:13.083 --rc geninfo_unexecuted_blocks=1 00:06:13.083 00:06:13.083 ' 00:06:13.083 21:38:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:13.083 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.083 --rc genhtml_branch_coverage=1 00:06:13.083 --rc genhtml_function_coverage=1 00:06:13.083 --rc genhtml_legend=1 00:06:13.083 --rc geninfo_all_blocks=1 00:06:13.083 --rc geninfo_unexecuted_blocks=1 00:06:13.083 00:06:13.083 ' 00:06:13.083 21:38:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:13.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.083 --rc genhtml_branch_coverage=1 00:06:13.083 --rc genhtml_function_coverage=1 00:06:13.083 --rc genhtml_legend=1 00:06:13.083 --rc geninfo_all_blocks=1 00:06:13.083 --rc geninfo_unexecuted_blocks=1 00:06:13.083 00:06:13.083 ' 00:06:13.083 21:38:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.083 21:38:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:13.083 21:38:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.083 21:38:36 thread -- common/autotest_common.sh@10 -- # set +x 00:06:13.083 ************************************ 00:06:13.083 START TEST thread_poller_perf 00:06:13.083 ************************************ 00:06:13.083 21:38:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:13.083 [2024-11-27 21:38:36.056417] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:13.083 [2024-11-27 21:38:36.056535] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71055 ] 00:06:13.347 [2024-11-27 21:38:36.210168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.347 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:13.347 [2024-11-27 21:38:36.234487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.288 [2024-11-27T21:38:37.409Z] ====================================== 00:06:14.288 [2024-11-27T21:38:37.409Z] busy:2298603126 (cyc) 00:06:14.288 [2024-11-27T21:38:37.409Z] total_run_count: 422000 00:06:14.288 [2024-11-27T21:38:37.409Z] tsc_hz: 2290000000 (cyc) 00:06:14.288 [2024-11-27T21:38:37.409Z] ====================================== 00:06:14.288 [2024-11-27T21:38:37.409Z] poller_cost: 5446 (cyc), 2378 (nsec) 00:06:14.288 00:06:14.288 real 0m1.284s 00:06:14.288 user 0m1.104s 00:06:14.288 sys 0m0.075s 00:06:14.288 21:38:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.288 21:38:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.288 ************************************ 00:06:14.288 END TEST thread_poller_perf 00:06:14.288 ************************************ 00:06:14.288 21:38:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.288 21:38:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:14.288 21:38:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.288 21:38:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:14.288 ************************************ 00:06:14.288 START TEST thread_poller_perf 00:06:14.288 
************************************ 00:06:14.288 21:38:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:14.288 [2024-11-27 21:38:37.407044] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:14.288 [2024-11-27 21:38:37.407199] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71092 ] 00:06:14.549 [2024-11-27 21:38:37.563329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.549 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:14.549 [2024-11-27 21:38:37.587563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.931 [2024-11-27T21:38:39.052Z] ====================================== 00:06:15.931 [2024-11-27T21:38:39.052Z] busy:2293279400 (cyc) 00:06:15.931 [2024-11-27T21:38:39.052Z] total_run_count: 5553000 00:06:15.931 [2024-11-27T21:38:39.052Z] tsc_hz: 2290000000 (cyc) 00:06:15.931 [2024-11-27T21:38:39.052Z] ====================================== 00:06:15.931 [2024-11-27T21:38:39.052Z] poller_cost: 412 (cyc), 179 (nsec) 00:06:15.931 00:06:15.931 real 0m1.280s 00:06:15.931 user 0m1.096s 00:06:15.931 sys 0m0.079s 00:06:15.931 21:38:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.931 21:38:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 END TEST thread_poller_perf 00:06:15.931 ************************************ 00:06:15.931 21:38:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:15.931 00:06:15.931 real 0m2.926s 00:06:15.931 user 0m2.378s 00:06:15.931 sys 0m0.355s 00:06:15.931 21:38:38 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.931 21:38:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 END TEST thread 00:06:15.931 ************************************ 00:06:15.931 21:38:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:15.931 21:38:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:15.931 21:38:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.931 21:38:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.931 21:38:38 -- common/autotest_common.sh@10 -- # set +x 00:06:15.931 ************************************ 00:06:15.931 START TEST app_cmdline 00:06:15.931 ************************************ 00:06:15.931 21:38:38 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:15.931 * Looking for test storage... 00:06:15.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:15.931 21:38:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:15.931 21:38:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:15.931 21:38:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:15.931 21:38:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:15.931 21:38:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.932 21:38:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:15.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.932 --rc genhtml_branch_coverage=1 00:06:15.932 --rc genhtml_function_coverage=1 00:06:15.932 --rc 
genhtml_legend=1 00:06:15.932 --rc geninfo_all_blocks=1 00:06:15.932 --rc geninfo_unexecuted_blocks=1 00:06:15.932 00:06:15.932 ' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:15.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.932 --rc genhtml_branch_coverage=1 00:06:15.932 --rc genhtml_function_coverage=1 00:06:15.932 --rc genhtml_legend=1 00:06:15.932 --rc geninfo_all_blocks=1 00:06:15.932 --rc geninfo_unexecuted_blocks=1 00:06:15.932 00:06:15.932 ' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:15.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.932 --rc genhtml_branch_coverage=1 00:06:15.932 --rc genhtml_function_coverage=1 00:06:15.932 --rc genhtml_legend=1 00:06:15.932 --rc geninfo_all_blocks=1 00:06:15.932 --rc geninfo_unexecuted_blocks=1 00:06:15.932 00:06:15.932 ' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:15.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.932 --rc genhtml_branch_coverage=1 00:06:15.932 --rc genhtml_function_coverage=1 00:06:15.932 --rc genhtml_legend=1 00:06:15.932 --rc geninfo_all_blocks=1 00:06:15.932 --rc geninfo_unexecuted_blocks=1 00:06:15.932 00:06:15.932 ' 00:06:15.932 21:38:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:15.932 21:38:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71175 00:06:15.932 21:38:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:15.932 21:38:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71175 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 71175 ']' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:15.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.932 21:38:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:16.194 [2024-11-27 21:38:39.070746] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:16.194 [2024-11-27 21:38:39.070884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71175 ] 00:06:16.194 [2024-11-27 21:38:39.224189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.194 [2024-11-27 21:38:39.248499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.765 21:38:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.765 21:38:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:16.765 21:38:39 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:17.025 { 00:06:17.025 "version": "SPDK v25.01-pre git sha1 35cd3e84d", 00:06:17.025 "fields": { 00:06:17.025 "major": 25, 00:06:17.025 "minor": 1, 00:06:17.025 "patch": 0, 00:06:17.025 "suffix": "-pre", 00:06:17.025 "commit": "35cd3e84d" 00:06:17.025 } 00:06:17.025 } 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:17.025 21:38:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:17.025 21:38:40 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:17.285 request: 00:06:17.285 { 00:06:17.285 "method": "env_dpdk_get_mem_stats", 00:06:17.285 "req_id": 1 00:06:17.285 } 00:06:17.285 Got JSON-RPC error response 00:06:17.285 response: 00:06:17.285 { 00:06:17.285 "code": -32601, 00:06:17.285 "message": "Method not found" 00:06:17.285 } 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:17.285 21:38:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71175 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 71175 ']' 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 71175 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71175 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.285 killing process with pid 71175 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71175' 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@973 -- # kill 71175 00:06:17.285 21:38:40 app_cmdline -- common/autotest_common.sh@978 -- # wait 71175 00:06:17.854 00:06:17.854 real 0m1.920s 00:06:17.854 user 0m2.132s 00:06:17.854 sys 0m0.512s 00:06:17.854 21:38:40 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.854 21:38:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.854 ************************************ 00:06:17.854 END TEST app_cmdline 00:06:17.854 ************************************ 00:06:17.854 21:38:40 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.854 21:38:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.854 21:38:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.854 21:38:40 -- common/autotest_common.sh@10 -- # set +x 00:06:17.854 ************************************ 00:06:17.854 START TEST version 00:06:17.854 ************************************ 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:17.854 * Looking for test storage... 00:06:17.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.854 21:38:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.854 21:38:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.854 21:38:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.854 21:38:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.854 21:38:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.854 21:38:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.854 21:38:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.854 21:38:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.854 21:38:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.854 21:38:40 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:17.854 21:38:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.854 21:38:40 version -- scripts/common.sh@344 -- # case "$op" in 00:06:17.854 21:38:40 version -- scripts/common.sh@345 -- # : 1 00:06:17.854 21:38:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.854 21:38:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.854 21:38:40 version -- scripts/common.sh@365 -- # decimal 1 00:06:17.854 21:38:40 version -- scripts/common.sh@353 -- # local d=1 00:06:17.854 21:38:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.854 21:38:40 version -- scripts/common.sh@355 -- # echo 1 00:06:17.854 21:38:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.854 21:38:40 version -- scripts/common.sh@366 -- # decimal 2 00:06:17.854 21:38:40 version -- scripts/common.sh@353 -- # local d=2 00:06:17.854 21:38:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.854 21:38:40 version -- scripts/common.sh@355 -- # echo 2 00:06:17.854 21:38:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.854 21:38:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.854 21:38:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.854 21:38:40 version -- scripts/common.sh@368 -- # return 0 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.854 --rc genhtml_branch_coverage=1 00:06:17.854 --rc genhtml_function_coverage=1 00:06:17.854 --rc genhtml_legend=1 00:06:17.854 --rc geninfo_all_blocks=1 00:06:17.854 --rc geninfo_unexecuted_blocks=1 00:06:17.854 00:06:17.854 ' 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.854 --rc genhtml_branch_coverage=1 00:06:17.854 --rc genhtml_function_coverage=1 00:06:17.854 --rc genhtml_legend=1 00:06:17.854 --rc geninfo_all_blocks=1 00:06:17.854 --rc geninfo_unexecuted_blocks=1 00:06:17.854 00:06:17.854 ' 00:06:17.854 21:38:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.854 --rc genhtml_branch_coverage=1 00:06:17.854 --rc genhtml_function_coverage=1 00:06:17.854 --rc genhtml_legend=1 00:06:17.854 --rc geninfo_all_blocks=1 00:06:17.854 --rc geninfo_unexecuted_blocks=1 00:06:17.854 00:06:17.855 ' 00:06:17.855 21:38:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.855 --rc genhtml_branch_coverage=1 00:06:17.855 --rc genhtml_function_coverage=1 00:06:17.855 --rc genhtml_legend=1 00:06:17.855 --rc geninfo_all_blocks=1 00:06:17.855 --rc geninfo_unexecuted_blocks=1 00:06:17.855 00:06:17.855 ' 00:06:17.855 21:38:40 version -- app/version.sh@17 -- # get_header_version major 00:06:17.855 21:38:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.855 21:38:40 version -- app/version.sh@14 -- # cut -f2 00:06:17.855 21:38:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.855 21:38:40 version -- app/version.sh@17 -- # major=25 00:06:17.855 21:38:40 version -- app/version.sh@18 -- # get_header_version minor 00:06:17.855 21:38:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:17.855 21:38:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:17.855 21:38:40 version -- app/version.sh@14 -- # cut -f2 00:06:18.132 21:38:40 version -- app/version.sh@18 -- # minor=1 00:06:18.132 21:38:40 
version -- app/version.sh@19 -- # get_header_version patch 00:06:18.132 21:38:40 version -- app/version.sh@14 -- # cut -f2 00:06:18.132 21:38:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.132 21:38:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.132 21:38:40 version -- app/version.sh@19 -- # patch=0 00:06:18.132 21:38:40 version -- app/version.sh@20 -- # get_header_version suffix 00:06:18.132 21:38:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:18.132 21:38:40 version -- app/version.sh@14 -- # cut -f2 00:06:18.132 21:38:40 version -- app/version.sh@14 -- # tr -d '"' 00:06:18.132 21:38:40 version -- app/version.sh@20 -- # suffix=-pre 00:06:18.132 21:38:40 version -- app/version.sh@22 -- # version=25.1 00:06:18.132 21:38:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:18.132 21:38:41 version -- app/version.sh@28 -- # version=25.1rc0 00:06:18.132 21:38:41 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:18.132 21:38:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:18.132 21:38:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:18.132 21:38:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:18.132 00:06:18.132 real 0m0.304s 00:06:18.132 user 0m0.195s 00:06:18.132 sys 0m0.159s 00:06:18.132 21:38:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.132 21:38:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:18.132 ************************************ 00:06:18.132 END TEST version 00:06:18.132 ************************************ 00:06:18.132 
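The `version` test above drives a `get_header_version` helper that greps a `#define` out of `include/spdk/version.h`, cuts the value field, and strips quotes, then assembles `25.1rc0` from the parts. The sketch below re-creates that pipeline against a synthetic header so it runs standalone; the file contents, the `rc0` suffix mapping, and the helper body are my reading of the xtrace, not the real `app/version.sh`.

```shell
# Synthetic stand-in for include/spdk/version.h (tab-separated, as cut -f2 expects).
hdr="$(mktemp)"
printf '#define SPDK_VERSION_MAJOR\t25\n' > "$hdr"
printf '#define SPDK_VERSION_MINOR\t1\n' >> "$hdr"
printf '#define SPDK_VERSION_PATCH\t0\n' >> "$hdr"
printf '#define SPDK_VERSION_SUFFIX\t"-pre"\n' >> "$hdr"

# Mirrors the grep | cut | tr pipeline shown in the trace for each component.
get_header_version() {
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
}

major=$(get_header_version MAJOR)
minor=$(get_header_version MINOR)
patch=$(get_header_version PATCH)
suffix=$(get_header_version SUFFIX)

# Assemble the version string the way the trace does: patch is only appended
# when non-zero, and a "-pre" suffix becomes "rc0" (assumption from the trace,
# where version=25.1 with patch=0 and suffix=-pre yields 25.1rc0).
version="${major}.${minor}"
[ "$patch" != 0 ] && version="${version}.${patch}"
[ "$suffix" = "-pre" ] && version="${version}rc0"
echo "$version"
rm -f "$hdr"
```

The test then compares this string against `python3 -c 'import spdk; print(spdk.__version__)'`, which is why the trace checks `25.1rc0 == 25.1rc0` before reporting `END TEST version`.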
21:38:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:18.132 21:38:41 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:18.132 21:38:41 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:18.132 21:38:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.132 21:38:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.132 21:38:41 -- common/autotest_common.sh@10 -- # set +x 00:06:18.132 ************************************ 00:06:18.132 START TEST bdev_raid 00:06:18.132 ************************************ 00:06:18.132 21:38:41 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:18.132 * Looking for test storage... 00:06:18.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:18.132 21:38:41 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.132 21:38:41 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.132 21:38:41 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.393 21:38:41 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.393 --rc genhtml_branch_coverage=1 00:06:18.393 --rc genhtml_function_coverage=1 00:06:18.393 --rc genhtml_legend=1 00:06:18.393 --rc geninfo_all_blocks=1 00:06:18.393 --rc geninfo_unexecuted_blocks=1 00:06:18.393 00:06:18.393 ' 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.393 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:18.393 --rc genhtml_branch_coverage=1 00:06:18.393 --rc genhtml_function_coverage=1 00:06:18.393 --rc genhtml_legend=1 00:06:18.393 --rc geninfo_all_blocks=1 00:06:18.393 --rc geninfo_unexecuted_blocks=1 00:06:18.393 00:06:18.393 ' 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.393 --rc genhtml_branch_coverage=1 00:06:18.393 --rc genhtml_function_coverage=1 00:06:18.393 --rc genhtml_legend=1 00:06:18.393 --rc geninfo_all_blocks=1 00:06:18.393 --rc geninfo_unexecuted_blocks=1 00:06:18.393 00:06:18.393 ' 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.393 --rc genhtml_branch_coverage=1 00:06:18.393 --rc genhtml_function_coverage=1 00:06:18.393 --rc genhtml_legend=1 00:06:18.393 --rc geninfo_all_blocks=1 00:06:18.393 --rc geninfo_unexecuted_blocks=1 00:06:18.393 00:06:18.393 ' 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:18.393 21:38:41 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:18.393 21:38:41 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:18.393 21:38:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.394 21:38:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.394 21:38:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:18.394 ************************************ 
00:06:18.394 START TEST raid1_resize_data_offset_test 00:06:18.394 ************************************ 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=71335 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 71335' 00:06:18.394 Process raid pid: 71335 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 71335 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.394 21:38:41 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:18.394 [2024-11-27 21:38:41.412771] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:18.394 [2024-11-27 21:38:41.412893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:18.654 [2024-11-27 21:38:41.567210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.654 [2024-11-27 21:38:41.592685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.654 [2024-11-27 21:38:41.635758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:18.654 [2024-11-27 21:38:41.635789] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 malloc0 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 malloc1 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.224 21:38:42 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 null0 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 [2024-11-27 21:38:42.307651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:19.224 [2024-11-27 21:38:42.309489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:19.224 [2024-11-27 21:38:42.309538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:19.224 [2024-11-27 21:38:42.309662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:19.224 [2024-11-27 21:38:42.309674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:19.224 [2024-11-27 21:38:42.309993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:19.224 [2024-11-27 21:38:42.310137] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:19.224 [2024-11-27 21:38:42.310172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:19.224 [2024-11-27 21:38:42.310308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.224 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.485 [2024-11-27 21:38:42.367526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.485 malloc2 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.485 [2024-11-27 21:38:42.490690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:19.485 [2024-11-27 21:38:42.495809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.485 [2024-11-27 21:38:42.497800] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 71335 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:06:19.485 killing process with pid 71335 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 71335 00:06:19.485 [2024-11-27 21:38:42.575057] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:19.485 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 71335 00:06:19.485 [2024-11-27 21:38:42.576093] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:19.485 [2024-11-27 21:38:42.576161] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:19.485 [2024-11-27 21:38:42.576196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:19.485 [2024-11-27 21:38:42.582195] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:19.485 [2024-11-27 21:38:42.582512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:19.485 [2024-11-27 21:38:42.582537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:19.744 [2024-11-27 21:38:42.791863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:20.004 21:38:42 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:20.004 00:06:20.004 real 0m1.660s 00:06:20.004 user 0m1.666s 00:06:20.004 sys 0m0.417s 00:06:20.004 21:38:42 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.004 21:38:42 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.004 ************************************ 00:06:20.004 END TEST raid1_resize_data_offset_test 00:06:20.004 ************************************ 00:06:20.004 21:38:43 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:20.004 21:38:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:20.004 21:38:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.004 21:38:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:20.004 ************************************ 00:06:20.004 START TEST raid0_resize_superblock_test 00:06:20.004 ************************************ 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71391 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:20.004 Process raid pid: 71391 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71391' 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71391 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71391 ']' 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.004 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.264 [2024-11-27 21:38:43.132593] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:20.264 [2024-11-27 21:38:43.132731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:20.264 [2024-11-27 21:38:43.286249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.264 [2024-11-27 21:38:43.311221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.264 [2024-11-27 21:38:43.352720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:20.264 [2024-11-27 21:38:43.352762] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:21.205 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.205 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:21.205 21:38:43 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:21.205 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:43 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:21.205 malloc0 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 [2024-11-27 21:38:44.078451] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:21.205 [2024-11-27 21:38:44.078539] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.205 [2024-11-27 21:38:44.078563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:21.205 [2024-11-27 21:38:44.078581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.205 [2024-11-27 21:38:44.080687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.205 [2024-11-27 21:38:44.080725] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:21.205 pt0 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 51da91ad-0a84-4949-856a-de13021a95da 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 f0e5e993-0219-4b13-ae6d-c8f62f6e5852 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 [2024-11-27 21:38:44.222960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0e5e993-0219-4b13-ae6d-c8f62f6e5852 is claimed 00:06:21.205 [2024-11-27 21:38:44.223043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4 is claimed 00:06:21.205 [2024-11-27 21:38:44.223168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:21.205 [2024-11-27 21:38:44.223201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:21.205 [2024-11-27 21:38:44.223489] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:21.205 [2024-11-27 21:38:44.223653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:21.205 [2024-11-27 21:38:44.223681] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:21.205 [2024-11-27 21:38:44.223838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:21.205 21:38:44 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.205 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:21.466 [2024-11-27 21:38:44.330982] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.466 [2024-11-27 21:38:44.378818] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:21.466 [2024-11-27 21:38:44.378843] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f0e5e993-0219-4b13-ae6d-c8f62f6e5852' was resized: old size 131072, new size 204800 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:21.466 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 [2024-11-27 21:38:44.390730] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:21.467 [2024-11-27 21:38:44.390755] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4' was resized: old size 131072, new size 204800 00:06:21.467 [2024-11-27 21:38:44.390778] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:21.467 [2024-11-27 21:38:44.498673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 [2024-11-27 21:38:44.542384] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:21.467 [2024-11-27 21:38:44.542453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:21.467 [2024-11-27 21:38:44.542465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:21.467 [2024-11-27 21:38:44.542476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:21.467 [2024-11-27 21:38:44.542647] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.467 [2024-11-27 21:38:44.542702] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:21.467 [2024-11-27 21:38:44.542715] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 [2024-11-27 21:38:44.554329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:21.467 [2024-11-27 21:38:44.554381] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.467 [2024-11-27 21:38:44.554401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:21.467 [2024-11-27 21:38:44.554411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.467 [2024-11-27 21:38:44.556476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.467 [2024-11-27 21:38:44.556513] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:21.467 [2024-11-27 21:38:44.557998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f0e5e993-0219-4b13-ae6d-c8f62f6e5852 00:06:21.467 [2024-11-27 21:38:44.558054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f0e5e993-0219-4b13-ae6d-c8f62f6e5852 is claimed 00:06:21.467 [2024-11-27 21:38:44.558136] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4 00:06:21.467 [2024-11-27 21:38:44.558173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4 is claimed 00:06:21.467 [2024-11-27 21:38:44.558282] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 200c9ea6-a3b5-4f98-b58e-ca007a5e6bb4 (2) smaller than existing raid bdev Raid (3) 00:06:21.467 [2024-11-27 21:38:44.558310] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev f0e5e993-0219-4b13-ae6d-c8f62f6e5852: File exists 00:06:21.467 [2024-11-27 21:38:44.558355] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:21.467 [2024-11-27 21:38:44.558365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:21.467 [2024-11-27 21:38:44.558622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:21.467 [2024-11-27 21:38:44.558820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:21.467 [2024-11-27 21:38:44.558838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:21.467 [2024-11-27 21:38:44.558997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:21.467 pt0 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.467 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.467 [2024-11-27 21:38:44.582532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71391 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71391 ']' 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71391 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71391 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.728 killing process with pid 71391 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71391' 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71391 00:06:21.728 [2024-11-27 21:38:44.647821] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:21.728 [2024-11-27 21:38:44.647896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:21.728 [2024-11-27 21:38:44.647952] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:21.728 [2024-11-27 21:38:44.647963] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:21.728 21:38:44 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71391 00:06:21.728 [2024-11-27 21:38:44.806678] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:21.988 21:38:45 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:21.988 00:06:21.988 real 0m1.960s 00:06:21.988 user 0m2.246s 00:06:21.988 sys 0m0.445s 00:06:21.988 21:38:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.988 21:38:45 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.988 
************************************ 00:06:21.988 END TEST raid0_resize_superblock_test 00:06:21.988 ************************************ 00:06:21.988 21:38:45 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:21.988 21:38:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:21.988 21:38:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.988 21:38:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:21.988 ************************************ 00:06:21.988 START TEST raid1_resize_superblock_test 00:06:21.988 ************************************ 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=71464 00:06:21.988 Process raid pid: 71464 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 71464' 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 71464 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71464 ']' 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.988 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.248 [2024-11-27 21:38:45.161318] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:22.248 [2024-11-27 21:38:45.161787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:22.248 [2024-11-27 21:38:45.317453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.248 [2024-11-27 21:38:45.342114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.508 [2024-11-27 21:38:45.383698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:22.508 [2024-11-27 21:38:45.383737] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:23.087 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.088 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:23.088 21:38:45 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:23.088 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.088 21:38:45 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.088 malloc0 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.088 21:38:46 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.088 [2024-11-27 21:38:46.125212] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:23.088 [2024-11-27 21:38:46.125275] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.088 [2024-11-27 21:38:46.125299] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:23.088 [2024-11-27 21:38:46.125310] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.088 [2024-11-27 21:38:46.127431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.088 [2024-11-27 21:38:46.127471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:23.088 pt0 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.088 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 44b2b045-1cb4-4abe-8260-0fe995163030 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 919fe2db-e0ac-4b2a-9511-5b1cb1efb943 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 c56eb24f-2d9a-4461-8399-ec8207118fd4 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 [2024-11-27 21:38:46.261472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 919fe2db-e0ac-4b2a-9511-5b1cb1efb943 is claimed 00:06:23.348 [2024-11-27 21:38:46.261580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c56eb24f-2d9a-4461-8399-ec8207118fd4 is claimed 00:06:23.348 [2024-11-27 21:38:46.261686] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:23.348 [2024-11-27 21:38:46.261698] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:23.348 [2024-11-27 21:38:46.262040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:23.348 [2024-11-27 21:38:46.262224] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:23.348 [2024-11-27 21:38:46.262245] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:23.348 [2024-11-27 21:38:46.262404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:23.348 [2024-11-27 21:38:46.373483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:23.348 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 [2024-11-27 21:38:46.421308] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:23.349 [2024-11-27 21:38:46.421335] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '919fe2db-e0ac-4b2a-9511-5b1cb1efb943' was resized: old size 131072, new size 204800 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:23.349 21:38:46 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 [2024-11-27 21:38:46.433238] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:23.349 [2024-11-27 21:38:46.433263] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'c56eb24f-2d9a-4461-8399-ec8207118fd4' was resized: old size 131072, new size 204800 00:06:23.349 [2024-11-27 21:38:46.433285] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:23.349 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:23.609 [2024-11-27 21:38:46.525208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:23.609 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 [2024-11-27 21:38:46.548953] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:23.610 [2024-11-27 21:38:46.549018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:06:23.610 [2024-11-27 21:38:46.549043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:23.610 [2024-11-27 21:38:46.549236] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:23.610 [2024-11-27 21:38:46.549428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:23.610 [2024-11-27 21:38:46.549488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:23.610 [2024-11-27 21:38:46.549502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 [2024-11-27 21:38:46.560915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:23.610 [2024-11-27 21:38:46.560964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.610 [2024-11-27 21:38:46.560985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:06:23.610 [2024-11-27 21:38:46.560995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.610 [2024-11-27 21:38:46.563150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.610 [2024-11-27 21:38:46.563184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:23.610 [2024-11-27 21:38:46.564699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
919fe2db-e0ac-4b2a-9511-5b1cb1efb943 00:06:23.610 [2024-11-27 21:38:46.564760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 919fe2db-e0ac-4b2a-9511-5b1cb1efb943 is claimed 00:06:23.610 [2024-11-27 21:38:46.564867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev c56eb24f-2d9a-4461-8399-ec8207118fd4 00:06:23.610 [2024-11-27 21:38:46.564898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev c56eb24f-2d9a-4461-8399-ec8207118fd4 is claimed 00:06:23.610 [2024-11-27 21:38:46.565014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev c56eb24f-2d9a-4461-8399-ec8207118fd4 (2) smaller than existing raid bdev Raid (3) 00:06:23.610 [2024-11-27 21:38:46.565058] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 919fe2db-e0ac-4b2a-9511-5b1cb1efb943: File exists 00:06:23.610 [2024-11-27 21:38:46.565097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:06:23.610 [2024-11-27 21:38:46.565107] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:23.610 [2024-11-27 21:38:46.565356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:06:23.610 [2024-11-27 21:38:46.565545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:06:23.610 [2024-11-27 21:38:46.565564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001580 00:06:23.610 [2024-11-27 21:38:46.565731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:23.610 pt0 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:23.610 [2024-11-27 21:38:46.585262] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 71464 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71464 ']' 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71464 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71464 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.610 killing process with pid 71464 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71464' 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 71464 00:06:23.610 [2024-11-27 21:38:46.669746] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:23.610 [2024-11-27 21:38:46.669854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:23.610 [2024-11-27 21:38:46.669918] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:23.610 [2024-11-27 21:38:46.669941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Raid, state offline 00:06:23.610 21:38:46 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 71464 00:06:23.870 [2024-11-27 21:38:46.827980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:24.130 21:38:47 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:24.130 00:06:24.130 real 0m1.956s 00:06:24.130 user 0m2.198s 00:06:24.130 sys 0m0.460s 00:06:24.130 21:38:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.130 21:38:47 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.130 ************************************ 00:06:24.130 END TEST raid1_resize_superblock_test 00:06:24.130 
************************************ 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:24.130 21:38:47 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:24.130 21:38:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:24.130 21:38:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.130 21:38:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:24.130 ************************************ 00:06:24.130 START TEST raid_function_test_raid0 00:06:24.130 ************************************ 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=71539 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:24.130 Process raid pid: 71539 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71539' 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 71539 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 
71539 ']' 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.130 21:38:47 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.130 [2024-11-27 21:38:47.193636] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:24.130 [2024-11-27 21:38:47.193828] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:24.391 [2024-11-27 21:38:47.358756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.391 [2024-11-27 21:38:47.383369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.391 [2024-11-27 21:38:47.425153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.391 [2024-11-27 21:38:47.425189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:24.960 21:38:48 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 Base_1 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 Base_2 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:24.960 [2024-11-27 21:38:48.064880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:24.960 [2024-11-27 21:38:48.066638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:24.960 [2024-11-27 21:38:48.066704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:24.960 [2024-11-27 21:38:48.066716] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:24.960 [2024-11-27 21:38:48.066983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:24.960 [2024-11-27 21:38:48.067180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:24.960 [2024-11-27 21:38:48.067205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: 
raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:24.960 [2024-11-27 21:38:48.067338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.960 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 
-- # (( i < 1 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:25.222 [2024-11-27 21:38:48.300508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:25.222 /dev/nbd0 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:25.222 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:25.483 1+0 records in 00:06:25.483 1+0 records out 00:06:25.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474152 s, 8.6 MB/s 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 
-- # size=4096 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.483 { 00:06:25.483 "nbd_device": "/dev/nbd0", 00:06:25.483 "bdev_name": "raid" 00:06:25.483 } 00:06:25.483 ]' 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.483 { 00:06:25.483 "nbd_device": "/dev/nbd0", 00:06:25.483 "bdev_name": "raid" 00:06:25.483 } 00:06:25.483 ]' 00:06:25.483 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:25.746 21:38:48 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:25.746 
21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:25.746 4096+0 records in 00:06:25.746 4096+0 records out 00:06:25.746 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0255807 s, 82.0 MB/s 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:25.746 4096+0 records in 00:06:25.746 4096+0 records out 00:06:25.746 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.184532 s, 11.4 MB/s 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:25.746 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:26.005 128+0 records in 00:06:26.005 128+0 records out 00:06:26.005 65536 bytes (66 kB, 64 KiB) copied, 0.00129516 s, 50.6 MB/s 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 
00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:26.005 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:26.005 2035+0 records in 00:06:26.005 2035+0 records out 00:06:26.006 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0148836 s, 70.0 MB/s 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:26.006 456+0 records in 00:06:26.006 456+0 records out 00:06:26.006 233472 bytes (233 kB, 228 KiB) copied, 0.00267619 s, 87.2 MB/s 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:26.006 21:38:48 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.006 21:38:48 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:26.265 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.266 [2024-11-27 21:38:49.171581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.266 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 71539 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 71539 ']' 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@958 -- # kill -0 71539 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.525 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71539 00:06:26.525 killing process with pid 71539 00:06:26.526 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.526 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.526 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71539' 00:06:26.526 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 71539 00:06:26.526 [2024-11-27 21:38:49.486968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:26.526 [2024-11-27 21:38:49.487076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:26.526 [2024-11-27 21:38:49.487131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:26.526 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 71539 00:06:26.526 [2024-11-27 21:38:49.487149] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:26.526 [2024-11-27 21:38:49.508971] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:26.785 21:38:49 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:26.785 00:06:26.785 real 0m2.596s 00:06:26.785 user 0m3.241s 00:06:26.785 sys 0m0.869s 00:06:26.785 ************************************ 00:06:26.785 END TEST raid_function_test_raid0 00:06:26.785 ************************************ 00:06:26.785 
21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.785 21:38:49 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 21:38:49 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:26.785 21:38:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.785 21:38:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.785 21:38:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:26.785 ************************************ 00:06:26.785 START TEST raid_function_test_concat 00:06:26.785 ************************************ 00:06:26.785 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:26.785 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:26.785 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:26.786 Process raid pid: 71655 00:06:26.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=71655 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 71655' 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 71655 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 71655 ']' 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.786 21:38:49 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:26.786 [2024-11-27 21:38:49.868038] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:26.786 [2024-11-27 21:38:49.868163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.046 [2024-11-27 21:38:50.021094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.046 [2024-11-27 21:38:50.047419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.046 [2024-11-27 21:38:50.089648] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.046 [2024-11-27 21:38:50.089682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:27.617 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.617 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.618 Base_1 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.618 Base_2 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.618 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.618 [2024-11-27 21:38:50.737730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:27.878 [2024-11-27 21:38:50.739622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:27.878 [2024-11-27 21:38:50.739690] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:27.878 [2024-11-27 21:38:50.739702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:27.878 [2024-11-27 21:38:50.740047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:27.878 [2024-11-27 21:38:50.740296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:27.878 [2024-11-27 21:38:50.740319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000001200 00:06:27.878 [2024-11-27 21:38:50.740445] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.878 21:38:50 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:27.878 21:38:50 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:27.878 [2024-11-27 21:38:50.977359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:06:27.878 /dev/nbd0 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:28.138 1+0 records in 00:06:28.138 1+0 records out 00:06:28.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000508933 s, 8.0 MB/s 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.138 { 00:06:28.138 "nbd_device": "/dev/nbd0", 00:06:28.138 "bdev_name": "raid" 00:06:28.138 } 00:06:28.138 ]' 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.138 { 00:06:28.138 "nbd_device": "/dev/nbd0", 00:06:28.138 "bdev_name": "raid" 00:06:28.138 } 00:06:28.138 ]' 00:06:28.138 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:28.398 21:38:51 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:28.398 4096+0 records in 00:06:28.398 4096+0 records out 00:06:28.398 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0332667 s, 63.0 MB/s 00:06:28.398 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:28.659 4096+0 records in 00:06:28.659 4096+0 records out 00:06:28.659 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.182985 s, 11.5 MB/s 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:28.659 128+0 records in 00:06:28.659 128+0 records out 00:06:28.659 65536 bytes (66 kB, 64 KiB) copied, 0.00118715 s, 55.2 MB/s 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:28.659 2035+0 records in 00:06:28.659 2035+0 records out 00:06:28.659 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0128582 s, 81.0 MB/s 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:28.659 456+0 records in 00:06:28.659 456+0 records out 00:06:28.659 233472 bytes (233 kB, 228 KiB) copied, 0.00282759 s, 82.6 MB/s 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:28.659 21:38:51 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.659 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.920 [2024-11-27 21:38:51.855645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:28.920 21:38:51 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:29.188 21:38:52 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 71655 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 71655 ']' 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 71655 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71655 00:06:29.189 killing process with pid 71655 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 71655' 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 71655 00:06:29.189 [2024-11-27 21:38:52.178050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:29.189 [2024-11-27 21:38:52.178150] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:29.189 [2024-11-27 21:38:52.178211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:29.189 [2024-11-27 21:38:52.178225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid, state offline 00:06:29.189 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 71655 00:06:29.189 [2024-11-27 21:38:52.200078] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:29.454 21:38:52 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:29.454 00:06:29.454 real 0m2.623s 00:06:29.454 user 0m3.274s 00:06:29.454 sys 0m0.892s 00:06:29.454 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.454 ************************************ 00:06:29.454 END TEST raid_function_test_concat 00:06:29.454 ************************************ 00:06:29.454 21:38:52 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:29.454 21:38:52 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:29.454 21:38:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:29.455 21:38:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.455 21:38:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:29.455 ************************************ 00:06:29.455 START TEST raid0_resize_test 00:06:29.455 ************************************ 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71766 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71766' 00:06:29.455 Process raid pid: 71766 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71766 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 71766 ']' 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.455 21:38:52 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.455 [2024-11-27 21:38:52.558485] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:29.455 [2024-11-27 21:38:52.558595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:29.716 [2024-11-27 21:38:52.693409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.716 [2024-11-27 21:38:52.718844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.716 [2024-11-27 21:38:52.761523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:29.716 [2024-11-27 21:38:52.761561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.285 Base_1 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:30.285 Base_2 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.285 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.545 [2024-11-27 21:38:53.407662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:30.545 [2024-11-27 21:38:53.409604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:30.545 [2024-11-27 21:38:53.409659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:30.545 [2024-11-27 21:38:53.409670] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:30.545 [2024-11-27 21:38:53.409945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:30.545 [2024-11-27 21:38:53.410053] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:30.545 [2024-11-27 21:38:53.410062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:30.545 [2024-11-27 21:38:53.410178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:06:30.545 [2024-11-27 21:38:53.419620] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.545 [2024-11-27 21:38:53.419692] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:30.545 true 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.545 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.546 [2024-11-27 21:38:53.435775] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.546 [2024-11-27 21:38:53.479501] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:30.546 [2024-11-27 21:38:53.479564] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:30.546 [2024-11-27 21:38:53.479661] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:30.546 true 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.546 [2024-11-27 21:38:53.495657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 71766 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 71766 ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 71766 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71766 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71766' 00:06:30.546 killing process with pid 71766 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 71766 00:06:30.546 [2024-11-27 21:38:53.561566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:30.546 [2024-11-27 21:38:53.561691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:30.546 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 71766 00:06:30.546 [2024-11-27 21:38:53.561790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:30.546 [2024-11-27 21:38:53.561813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:30.546 [2024-11-27 21:38:53.563272] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:30.806 21:38:53 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:30.806 00:06:30.806 real 0m1.293s 00:06:30.806 user 0m1.456s 00:06:30.806 sys 0m0.282s 00:06:30.806 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.806 21:38:53 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.806 ************************************ 00:06:30.806 END TEST raid0_resize_test 00:06:30.806 ************************************ 00:06:30.806 21:38:53 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:30.806 
21:38:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.806 21:38:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.806 21:38:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:30.806 ************************************ 00:06:30.806 START TEST raid1_resize_test 00:06:30.806 ************************************ 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=71817 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 71817' 00:06:30.806 Process raid pid: 71817 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 71817 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 71817 ']' 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.806 21:38:53 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:30.806 [2024-11-27 21:38:53.914520] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:30.806 [2024-11-27 21:38:53.914729] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.066 [2024-11-27 21:38:54.068927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.066 [2024-11-27 21:38:54.094033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.066 [2024-11-27 21:38:54.135454] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.066 [2024-11-27 21:38:54.135570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.637 
Base_1 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.637 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.898 Base_2 00:06:31.898 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.898 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:31.898 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:31.898 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.898 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.898 [2024-11-27 21:38:54.773740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:31.898 [2024-11-27 21:38:54.775629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:31.898 [2024-11-27 21:38:54.775691] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:31.898 [2024-11-27 21:38:54.775703] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:31.898 [2024-11-27 21:38:54.775962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000021f0 00:06:31.898 [2024-11-27 21:38:54.776065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:31.899 [2024-11-27 21:38:54.776072] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000001200 00:06:31.899 [2024-11-27 21:38:54.776196] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.899 [2024-11-27 21:38:54.785723] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.899 [2024-11-27 21:38:54.785825] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:31.899 true 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.899 [2024-11-27 21:38:54.801902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.899 [2024-11-27 21:38:54.845594] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:31.899 [2024-11-27 21:38:54.845617] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:31.899 [2024-11-27 21:38:54.845645] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:31.899 true 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:31.899 [2024-11-27 21:38:54.861741] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 71817 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 71817 ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 71817 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71817 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71817' 00:06:31.899 killing process with pid 71817 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 71817 00:06:31.899 [2024-11-27 21:38:54.928466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:31.899 [2024-11-27 21:38:54.928593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:31.899 21:38:54 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 71817 00:06:31.899 [2024-11-27 21:38:54.929085] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:31.899 [2024-11-27 21:38:54.929164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Raid, state offline 00:06:31.899 [2024-11-27 21:38:54.930371] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:32.160 21:38:55 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:32.160 00:06:32.160 real 0m1.306s 00:06:32.160 user 0m1.473s 00:06:32.160 sys 0m0.288s 00:06:32.160 21:38:55 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.160 21:38:55 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.160 ************************************ 00:06:32.160 END TEST raid1_resize_test 00:06:32.160 ************************************ 00:06:32.160 21:38:55 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:32.160 21:38:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:32.160 21:38:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:32.160 21:38:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:32.160 21:38:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.160 21:38:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.160 ************************************ 00:06:32.160 START TEST raid_state_function_test 00:06:32.160 ************************************ 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:32.160 Process raid pid: 71863 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71863 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.160 21:38:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71863' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71863 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71863 ']' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.160 21:38:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.421 [2024-11-27 21:38:55.299375] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:32.421 [2024-11-27 21:38:55.299578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.421 [2024-11-27 21:38:55.433757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.421 [2024-11-27 21:38:55.458680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.421 [2024-11-27 21:38:55.500067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.421 [2024-11-27 21:38:55.500209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.384 [2024-11-27 21:38:56.130297] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:33.384 [2024-11-27 21:38:56.130444] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:33.384 [2024-11-27 21:38:56.130463] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.384 [2024-11-27 21:38:56.130476] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.384 21:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.384 "name": "Existed_Raid", 00:06:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.384 "strip_size_kb": 64, 00:06:33.384 "state": "configuring", 00:06:33.384 
"raid_level": "raid0", 00:06:33.384 "superblock": false, 00:06:33.384 "num_base_bdevs": 2, 00:06:33.384 "num_base_bdevs_discovered": 0, 00:06:33.384 "num_base_bdevs_operational": 2, 00:06:33.384 "base_bdevs_list": [ 00:06:33.384 { 00:06:33.384 "name": "BaseBdev1", 00:06:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.384 "is_configured": false, 00:06:33.384 "data_offset": 0, 00:06:33.384 "data_size": 0 00:06:33.384 }, 00:06:33.384 { 00:06:33.384 "name": "BaseBdev2", 00:06:33.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.384 "is_configured": false, 00:06:33.384 "data_offset": 0, 00:06:33.384 "data_size": 0 00:06:33.384 } 00:06:33.384 ] 00:06:33.384 }' 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.384 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 [2024-11-27 21:38:56.593436] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:33.667 [2024-11-27 21:38:56.593526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:33.667 [2024-11-27 21:38:56.601400] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:33.667 [2024-11-27 21:38:56.601490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:33.667 [2024-11-27 21:38:56.601531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:33.667 [2024-11-27 21:38:56.601599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 [2024-11-27 21:38:56.618074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:33.667 BaseBdev1 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 [ 00:06:33.667 { 00:06:33.667 "name": "BaseBdev1", 00:06:33.667 "aliases": [ 00:06:33.667 "8f06e1c3-420e-47a8-b122-bbef174cc6c2" 00:06:33.667 ], 00:06:33.667 "product_name": "Malloc disk", 00:06:33.667 "block_size": 512, 00:06:33.667 "num_blocks": 65536, 00:06:33.667 "uuid": "8f06e1c3-420e-47a8-b122-bbef174cc6c2", 00:06:33.667 "assigned_rate_limits": { 00:06:33.667 "rw_ios_per_sec": 0, 00:06:33.667 "rw_mbytes_per_sec": 0, 00:06:33.667 "r_mbytes_per_sec": 0, 00:06:33.667 "w_mbytes_per_sec": 0 00:06:33.667 }, 00:06:33.667 "claimed": true, 00:06:33.667 "claim_type": "exclusive_write", 00:06:33.667 "zoned": false, 00:06:33.667 "supported_io_types": { 00:06:33.667 "read": true, 00:06:33.667 "write": true, 00:06:33.667 "unmap": true, 00:06:33.667 "flush": true, 00:06:33.667 "reset": true, 00:06:33.667 "nvme_admin": false, 00:06:33.667 "nvme_io": false, 00:06:33.667 "nvme_io_md": false, 00:06:33.667 "write_zeroes": true, 00:06:33.667 "zcopy": true, 00:06:33.667 "get_zone_info": false, 00:06:33.667 "zone_management": false, 00:06:33.667 "zone_append": false, 00:06:33.667 "compare": false, 00:06:33.667 "compare_and_write": false, 00:06:33.667 "abort": true, 00:06:33.667 "seek_hole": false, 00:06:33.667 "seek_data": false, 00:06:33.667 "copy": true, 00:06:33.667 "nvme_iov_md": 
false 00:06:33.667 }, 00:06:33.667 "memory_domains": [ 00:06:33.667 { 00:06:33.667 "dma_device_id": "system", 00:06:33.667 "dma_device_type": 1 00:06:33.667 }, 00:06:33.667 { 00:06:33.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:33.667 "dma_device_type": 2 00:06:33.667 } 00:06:33.667 ], 00:06:33.667 "driver_specific": {} 00:06:33.667 } 00:06:33.667 ] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.667 21:38:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:33.667 "name": "Existed_Raid", 00:06:33.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.667 "strip_size_kb": 64, 00:06:33.667 "state": "configuring", 00:06:33.667 "raid_level": "raid0", 00:06:33.667 "superblock": false, 00:06:33.667 "num_base_bdevs": 2, 00:06:33.667 "num_base_bdevs_discovered": 1, 00:06:33.667 "num_base_bdevs_operational": 2, 00:06:33.667 "base_bdevs_list": [ 00:06:33.667 { 00:06:33.667 "name": "BaseBdev1", 00:06:33.667 "uuid": "8f06e1c3-420e-47a8-b122-bbef174cc6c2", 00:06:33.667 "is_configured": true, 00:06:33.667 "data_offset": 0, 00:06:33.667 "data_size": 65536 00:06:33.667 }, 00:06:33.667 { 00:06:33.667 "name": "BaseBdev2", 00:06:33.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:33.667 "is_configured": false, 00:06:33.667 "data_offset": 0, 00:06:33.667 "data_size": 0 00:06:33.667 } 00:06:33.667 ] 00:06:33.667 }' 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:33.667 21:38:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.248 [2024-11-27 21:38:57.065373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:34.248 [2024-11-27 21:38:57.065427] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.248 [2024-11-27 21:38:57.077374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:34.248 [2024-11-27 21:38:57.079231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:34.248 [2024-11-27 21:38:57.079333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.248 "name": "Existed_Raid", 00:06:34.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.248 "strip_size_kb": 64, 00:06:34.248 "state": "configuring", 00:06:34.248 "raid_level": "raid0", 00:06:34.248 "superblock": false, 00:06:34.248 "num_base_bdevs": 2, 00:06:34.248 "num_base_bdevs_discovered": 1, 00:06:34.248 "num_base_bdevs_operational": 2, 00:06:34.248 "base_bdevs_list": [ 00:06:34.248 { 00:06:34.248 "name": "BaseBdev1", 00:06:34.248 "uuid": "8f06e1c3-420e-47a8-b122-bbef174cc6c2", 00:06:34.248 "is_configured": true, 00:06:34.248 "data_offset": 0, 00:06:34.248 "data_size": 65536 00:06:34.248 }, 00:06:34.248 { 00:06:34.248 "name": "BaseBdev2", 00:06:34.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:34.248 "is_configured": false, 00:06:34.248 "data_offset": 0, 00:06:34.248 "data_size": 0 
00:06:34.248 } 00:06:34.248 ] 00:06:34.248 }' 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:34.248 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.507 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:34.507 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.507 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.507 [2024-11-27 21:38:57.539458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:34.507 [2024-11-27 21:38:57.539571] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:34.507 [2024-11-27 21:38:57.539634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:34.508 [2024-11-27 21:38:57.539996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:34.508 [2024-11-27 21:38:57.540212] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:34.508 [2024-11-27 21:38:57.540265] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:34.508 [2024-11-27 21:38:57.540556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:34.508 BaseBdev2 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:34.508 21:38:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.508 [ 00:06:34.508 { 00:06:34.508 "name": "BaseBdev2", 00:06:34.508 "aliases": [ 00:06:34.508 "1e5c1fa3-d373-43a3-8a61-6d7725fc4040" 00:06:34.508 ], 00:06:34.508 "product_name": "Malloc disk", 00:06:34.508 "block_size": 512, 00:06:34.508 "num_blocks": 65536, 00:06:34.508 "uuid": "1e5c1fa3-d373-43a3-8a61-6d7725fc4040", 00:06:34.508 "assigned_rate_limits": { 00:06:34.508 "rw_ios_per_sec": 0, 00:06:34.508 "rw_mbytes_per_sec": 0, 00:06:34.508 "r_mbytes_per_sec": 0, 00:06:34.508 "w_mbytes_per_sec": 0 00:06:34.508 }, 00:06:34.508 "claimed": true, 00:06:34.508 "claim_type": "exclusive_write", 00:06:34.508 "zoned": false, 00:06:34.508 "supported_io_types": { 00:06:34.508 "read": true, 00:06:34.508 "write": true, 00:06:34.508 "unmap": true, 00:06:34.508 "flush": true, 00:06:34.508 "reset": true, 00:06:34.508 "nvme_admin": false, 00:06:34.508 "nvme_io": false, 00:06:34.508 "nvme_io_md": 
false, 00:06:34.508 "write_zeroes": true, 00:06:34.508 "zcopy": true, 00:06:34.508 "get_zone_info": false, 00:06:34.508 "zone_management": false, 00:06:34.508 "zone_append": false, 00:06:34.508 "compare": false, 00:06:34.508 "compare_and_write": false, 00:06:34.508 "abort": true, 00:06:34.508 "seek_hole": false, 00:06:34.508 "seek_data": false, 00:06:34.508 "copy": true, 00:06:34.508 "nvme_iov_md": false 00:06:34.508 }, 00:06:34.508 "memory_domains": [ 00:06:34.508 { 00:06:34.508 "dma_device_id": "system", 00:06:34.508 "dma_device_type": 1 00:06:34.508 }, 00:06:34.508 { 00:06:34.508 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:34.508 "dma_device_type": 2 00:06:34.508 } 00:06:34.508 ], 00:06:34.508 "driver_specific": {} 00:06:34.508 } 00:06:34.508 ] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:34.508 "name": "Existed_Raid", 00:06:34.508 "uuid": "0a2a4ac2-081e-49aa-9a6a-d1eaccec0796", 00:06:34.508 "strip_size_kb": 64, 00:06:34.508 "state": "online", 00:06:34.508 "raid_level": "raid0", 00:06:34.508 "superblock": false, 00:06:34.508 "num_base_bdevs": 2, 00:06:34.508 "num_base_bdevs_discovered": 2, 00:06:34.508 "num_base_bdevs_operational": 2, 00:06:34.508 "base_bdevs_list": [ 00:06:34.508 { 00:06:34.508 "name": "BaseBdev1", 00:06:34.508 "uuid": "8f06e1c3-420e-47a8-b122-bbef174cc6c2", 00:06:34.508 "is_configured": true, 00:06:34.508 "data_offset": 0, 00:06:34.508 "data_size": 65536 00:06:34.508 }, 00:06:34.508 { 00:06:34.508 "name": "BaseBdev2", 00:06:34.508 "uuid": "1e5c1fa3-d373-43a3-8a61-6d7725fc4040", 00:06:34.508 "is_configured": true, 00:06:34.508 "data_offset": 0, 00:06:34.508 "data_size": 65536 00:06:34.508 } 00:06:34.508 ] 00:06:34.508 }' 00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:34.508 21:38:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.076 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:35.076 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:35.077 [2024-11-27 21:38:58.018965] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:35.077 "name": "Existed_Raid", 00:06:35.077 "aliases": [ 00:06:35.077 "0a2a4ac2-081e-49aa-9a6a-d1eaccec0796" 00:06:35.077 ], 00:06:35.077 "product_name": "Raid Volume", 00:06:35.077 "block_size": 512, 00:06:35.077 "num_blocks": 131072, 00:06:35.077 "uuid": "0a2a4ac2-081e-49aa-9a6a-d1eaccec0796", 00:06:35.077 "assigned_rate_limits": { 00:06:35.077 "rw_ios_per_sec": 0, 00:06:35.077 "rw_mbytes_per_sec": 0, 00:06:35.077 "r_mbytes_per_sec": 
0, 00:06:35.077 "w_mbytes_per_sec": 0 00:06:35.077 }, 00:06:35.077 "claimed": false, 00:06:35.077 "zoned": false, 00:06:35.077 "supported_io_types": { 00:06:35.077 "read": true, 00:06:35.077 "write": true, 00:06:35.077 "unmap": true, 00:06:35.077 "flush": true, 00:06:35.077 "reset": true, 00:06:35.077 "nvme_admin": false, 00:06:35.077 "nvme_io": false, 00:06:35.077 "nvme_io_md": false, 00:06:35.077 "write_zeroes": true, 00:06:35.077 "zcopy": false, 00:06:35.077 "get_zone_info": false, 00:06:35.077 "zone_management": false, 00:06:35.077 "zone_append": false, 00:06:35.077 "compare": false, 00:06:35.077 "compare_and_write": false, 00:06:35.077 "abort": false, 00:06:35.077 "seek_hole": false, 00:06:35.077 "seek_data": false, 00:06:35.077 "copy": false, 00:06:35.077 "nvme_iov_md": false 00:06:35.077 }, 00:06:35.077 "memory_domains": [ 00:06:35.077 { 00:06:35.077 "dma_device_id": "system", 00:06:35.077 "dma_device_type": 1 00:06:35.077 }, 00:06:35.077 { 00:06:35.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.077 "dma_device_type": 2 00:06:35.077 }, 00:06:35.077 { 00:06:35.077 "dma_device_id": "system", 00:06:35.077 "dma_device_type": 1 00:06:35.077 }, 00:06:35.077 { 00:06:35.077 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.077 "dma_device_type": 2 00:06:35.077 } 00:06:35.077 ], 00:06:35.077 "driver_specific": { 00:06:35.077 "raid": { 00:06:35.077 "uuid": "0a2a4ac2-081e-49aa-9a6a-d1eaccec0796", 00:06:35.077 "strip_size_kb": 64, 00:06:35.077 "state": "online", 00:06:35.077 "raid_level": "raid0", 00:06:35.077 "superblock": false, 00:06:35.077 "num_base_bdevs": 2, 00:06:35.077 "num_base_bdevs_discovered": 2, 00:06:35.077 "num_base_bdevs_operational": 2, 00:06:35.077 "base_bdevs_list": [ 00:06:35.077 { 00:06:35.077 "name": "BaseBdev1", 00:06:35.077 "uuid": "8f06e1c3-420e-47a8-b122-bbef174cc6c2", 00:06:35.077 "is_configured": true, 00:06:35.077 "data_offset": 0, 00:06:35.077 "data_size": 65536 00:06:35.077 }, 00:06:35.077 { 00:06:35.077 "name": "BaseBdev2", 
00:06:35.077 "uuid": "1e5c1fa3-d373-43a3-8a61-6d7725fc4040", 00:06:35.077 "is_configured": true, 00:06:35.077 "data_offset": 0, 00:06:35.077 "data_size": 65536 00:06:35.077 } 00:06:35.077 ] 00:06:35.077 } 00:06:35.077 } 00:06:35.077 }' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:35.077 BaseBdev2' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.077 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.337 [2024-11-27 21:38:58.218407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:35.337 [2024-11-27 21:38:58.218435] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:35.337 [2024-11-27 21:38:58.218482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:35.337 "name": "Existed_Raid", 00:06:35.337 "uuid": "0a2a4ac2-081e-49aa-9a6a-d1eaccec0796", 00:06:35.337 "strip_size_kb": 64, 00:06:35.337 
"state": "offline", 00:06:35.337 "raid_level": "raid0", 00:06:35.337 "superblock": false, 00:06:35.337 "num_base_bdevs": 2, 00:06:35.337 "num_base_bdevs_discovered": 1, 00:06:35.337 "num_base_bdevs_operational": 1, 00:06:35.337 "base_bdevs_list": [ 00:06:35.337 { 00:06:35.337 "name": null, 00:06:35.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:35.337 "is_configured": false, 00:06:35.337 "data_offset": 0, 00:06:35.337 "data_size": 65536 00:06:35.337 }, 00:06:35.337 { 00:06:35.337 "name": "BaseBdev2", 00:06:35.337 "uuid": "1e5c1fa3-d373-43a3-8a61-6d7725fc4040", 00:06:35.337 "is_configured": true, 00:06:35.337 "data_offset": 0, 00:06:35.337 "data_size": 65536 00:06:35.337 } 00:06:35.337 ] 00:06:35.337 }' 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:35.337 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.597 [2024-11-27 21:38:58.660713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:35.597 [2024-11-27 21:38:58.660843] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.597 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71863 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71863 ']' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 71863 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71863 00:06:35.857 killing process with pid 71863 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71863' 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71863 00:06:35.857 [2024-11-27 21:38:58.768258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:35.857 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71863 00:06:35.857 [2024-11-27 21:38:58.769265] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.117 21:38:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:36.117 00:06:36.117 real 0m3.772s 00:06:36.117 user 0m5.987s 00:06:36.117 sys 0m0.720s 00:06:36.117 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.117 ************************************ 00:06:36.117 END TEST raid_state_function_test 00:06:36.117 ************************************ 00:06:36.117 21:38:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.117 21:38:59 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:36.117 21:38:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:36.117 21:38:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.117 21:38:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.117 ************************************ 00:06:36.117 START TEST raid_state_function_test_sb 00:06:36.117 ************************************ 00:06:36.117 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:36.117 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:36.117 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:36.117 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:36.118 Process raid pid: 72105 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72105 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72105' 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72105 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72105 ']' 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.118 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.118 [2024-11-27 21:38:59.137513] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:36.118 [2024-11-27 21:38:59.138137] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:36.377 [2024-11-27 21:38:59.294873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.377 [2024-11-27 21:38:59.319680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.377 [2024-11-27 21:38:59.361434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.377 [2024-11-27 21:38:59.361550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.946 [2024-11-27 21:38:59.964180] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:36.946 [2024-11-27 21:38:59.964315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:36.946 [2024-11-27 21:38:59.964389] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:36.946 [2024-11-27 21:38:59.964447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:36.946 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:36.947 21:38:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.947 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:36.947 "name": "Existed_Raid", 00:06:36.947 "uuid": "f02e7ba2-e022-4873-9bea-7d3e42ba0b00", 00:06:36.947 "strip_size_kb": 64, 00:06:36.947 "state": "configuring", 00:06:36.947 "raid_level": "raid0", 00:06:36.947 "superblock": true, 00:06:36.947 "num_base_bdevs": 2, 00:06:36.947 "num_base_bdevs_discovered": 0, 00:06:36.947 "num_base_bdevs_operational": 2, 00:06:36.947 "base_bdevs_list": [ 00:06:36.947 { 00:06:36.947 "name": "BaseBdev1", 00:06:36.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.947 "is_configured": false, 00:06:36.947 "data_offset": 0, 00:06:36.947 "data_size": 0 00:06:36.947 }, 00:06:36.947 { 00:06:36.947 "name": "BaseBdev2", 00:06:36.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:36.947 "is_configured": false, 00:06:36.947 "data_offset": 0, 00:06:36.947 "data_size": 0 00:06:36.947 } 00:06:36.947 ] 00:06:36.947 }' 00:06:36.947 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:36.947 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [2024-11-27 21:39:00.387352] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:37.517 [2024-11-27 21:39:00.387439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [2024-11-27 21:39:00.399336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:37.517 [2024-11-27 21:39:00.399424] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:37.517 [2024-11-27 21:39:00.399440] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.517 [2024-11-27 21:39:00.399462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [2024-11-27 21:39:00.420231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.517 BaseBdev1 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.517 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [ 00:06:37.517 { 00:06:37.517 "name": "BaseBdev1", 00:06:37.517 "aliases": [ 00:06:37.517 "086d2893-b861-426b-8857-45ea90b08e56" 00:06:37.517 ], 00:06:37.517 "product_name": "Malloc disk", 00:06:37.517 "block_size": 512, 00:06:37.517 "num_blocks": 65536, 00:06:37.517 "uuid": "086d2893-b861-426b-8857-45ea90b08e56", 00:06:37.517 "assigned_rate_limits": { 00:06:37.517 "rw_ios_per_sec": 0, 00:06:37.517 "rw_mbytes_per_sec": 0, 00:06:37.517 "r_mbytes_per_sec": 0, 00:06:37.517 "w_mbytes_per_sec": 0 00:06:37.517 }, 00:06:37.517 "claimed": true, 
00:06:37.517 "claim_type": "exclusive_write", 00:06:37.517 "zoned": false, 00:06:37.517 "supported_io_types": { 00:06:37.517 "read": true, 00:06:37.517 "write": true, 00:06:37.517 "unmap": true, 00:06:37.517 "flush": true, 00:06:37.517 "reset": true, 00:06:37.517 "nvme_admin": false, 00:06:37.517 "nvme_io": false, 00:06:37.517 "nvme_io_md": false, 00:06:37.517 "write_zeroes": true, 00:06:37.517 "zcopy": true, 00:06:37.517 "get_zone_info": false, 00:06:37.517 "zone_management": false, 00:06:37.517 "zone_append": false, 00:06:37.517 "compare": false, 00:06:37.517 "compare_and_write": false, 00:06:37.517 "abort": true, 00:06:37.517 "seek_hole": false, 00:06:37.517 "seek_data": false, 00:06:37.518 "copy": true, 00:06:37.518 "nvme_iov_md": false 00:06:37.518 }, 00:06:37.518 "memory_domains": [ 00:06:37.518 { 00:06:37.518 "dma_device_id": "system", 00:06:37.518 "dma_device_type": 1 00:06:37.518 }, 00:06:37.518 { 00:06:37.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.518 "dma_device_type": 2 00:06:37.518 } 00:06:37.518 ], 00:06:37.518 "driver_specific": {} 00:06:37.518 } 00:06:37.518 ] 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:37.518 21:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:37.518 "name": "Existed_Raid", 00:06:37.518 "uuid": "85174c2f-fdff-41be-a954-004fa22d4b61", 00:06:37.518 "strip_size_kb": 64, 00:06:37.518 "state": "configuring", 00:06:37.518 "raid_level": "raid0", 00:06:37.518 "superblock": true, 00:06:37.518 "num_base_bdevs": 2, 00:06:37.518 "num_base_bdevs_discovered": 1, 00:06:37.518 "num_base_bdevs_operational": 2, 00:06:37.518 "base_bdevs_list": [ 00:06:37.518 { 00:06:37.518 "name": "BaseBdev1", 00:06:37.518 "uuid": "086d2893-b861-426b-8857-45ea90b08e56", 00:06:37.518 "is_configured": true, 00:06:37.518 "data_offset": 2048, 00:06:37.518 "data_size": 63488 00:06:37.518 }, 00:06:37.518 { 00:06:37.518 "name": "BaseBdev2", 00:06:37.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:37.518 
"is_configured": false, 00:06:37.518 "data_offset": 0, 00:06:37.518 "data_size": 0 00:06:37.518 } 00:06:37.518 ] 00:06:37.518 }' 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:37.518 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.778 [2024-11-27 21:39:00.887493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:37.778 [2024-11-27 21:39:00.887622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.778 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:37.778 [2024-11-27 21:39:00.895501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:37.778 [2024-11-27 21:39:00.897446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:37.778 [2024-11-27 21:39:00.897482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.038 21:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.038 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.038 21:39:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.038 "name": "Existed_Raid", 00:06:38.038 "uuid": "96a52e42-5e03-4e2c-82a7-c17e9ae411c3", 00:06:38.038 "strip_size_kb": 64, 00:06:38.038 "state": "configuring", 00:06:38.038 "raid_level": "raid0", 00:06:38.038 "superblock": true, 00:06:38.038 "num_base_bdevs": 2, 00:06:38.038 "num_base_bdevs_discovered": 1, 00:06:38.038 "num_base_bdevs_operational": 2, 00:06:38.038 "base_bdevs_list": [ 00:06:38.038 { 00:06:38.038 "name": "BaseBdev1", 00:06:38.038 "uuid": "086d2893-b861-426b-8857-45ea90b08e56", 00:06:38.038 "is_configured": true, 00:06:38.038 "data_offset": 2048, 00:06:38.038 "data_size": 63488 00:06:38.038 }, 00:06:38.038 { 00:06:38.038 "name": "BaseBdev2", 00:06:38.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:38.039 "is_configured": false, 00:06:38.039 "data_offset": 0, 00:06:38.039 "data_size": 0 00:06:38.039 } 00:06:38.039 ] 00:06:38.039 }' 00:06:38.039 21:39:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.039 21:39:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 [2024-11-27 21:39:01.293556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:38.298 [2024-11-27 21:39:01.293855] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:38.298 [2024-11-27 21:39:01.293908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:38.298 BaseBdev2 00:06:38.298 [2024-11-27 21:39:01.294227] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:38.298 [2024-11-27 21:39:01.294416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:38.298 [2024-11-27 21:39:01.294478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:38.298 [2024-11-27 21:39:01.294615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.298 
21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.298 [ 00:06:38.298 { 00:06:38.298 "name": "BaseBdev2", 00:06:38.298 "aliases": [ 00:06:38.298 "78f6accb-1bc3-4b15-bdae-d17b8c6da5b8" 00:06:38.298 ], 00:06:38.298 "product_name": "Malloc disk", 00:06:38.298 "block_size": 512, 00:06:38.298 "num_blocks": 65536, 00:06:38.298 "uuid": "78f6accb-1bc3-4b15-bdae-d17b8c6da5b8", 00:06:38.298 "assigned_rate_limits": { 00:06:38.298 "rw_ios_per_sec": 0, 00:06:38.298 "rw_mbytes_per_sec": 0, 00:06:38.298 "r_mbytes_per_sec": 0, 00:06:38.298 "w_mbytes_per_sec": 0 00:06:38.298 }, 00:06:38.298 "claimed": true, 00:06:38.298 "claim_type": "exclusive_write", 00:06:38.298 "zoned": false, 00:06:38.298 "supported_io_types": { 00:06:38.298 "read": true, 00:06:38.298 "write": true, 00:06:38.298 "unmap": true, 00:06:38.298 "flush": true, 00:06:38.298 "reset": true, 00:06:38.298 "nvme_admin": false, 00:06:38.298 "nvme_io": false, 00:06:38.298 "nvme_io_md": false, 00:06:38.298 "write_zeroes": true, 00:06:38.298 "zcopy": true, 00:06:38.298 "get_zone_info": false, 00:06:38.298 "zone_management": false, 00:06:38.298 "zone_append": false, 00:06:38.298 "compare": false, 00:06:38.298 "compare_and_write": false, 00:06:38.298 "abort": true, 00:06:38.298 "seek_hole": false, 00:06:38.298 "seek_data": false, 00:06:38.298 "copy": true, 00:06:38.298 "nvme_iov_md": false 00:06:38.298 }, 00:06:38.298 "memory_domains": [ 00:06:38.298 { 00:06:38.298 "dma_device_id": "system", 00:06:38.298 "dma_device_type": 1 00:06:38.298 }, 00:06:38.298 { 00:06:38.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.298 "dma_device_type": 2 00:06:38.298 } 00:06:38.298 ], 00:06:38.298 "driver_specific": {} 00:06:38.298 } 00:06:38.298 ] 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:38.298 21:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:38.298 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.299 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.299 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:38.299 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.299 21:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:38.299 "name": "Existed_Raid", 00:06:38.299 "uuid": "96a52e42-5e03-4e2c-82a7-c17e9ae411c3", 00:06:38.299 "strip_size_kb": 64, 00:06:38.299 "state": "online", 00:06:38.299 "raid_level": "raid0", 00:06:38.299 "superblock": true, 00:06:38.299 "num_base_bdevs": 2, 00:06:38.299 "num_base_bdevs_discovered": 2, 00:06:38.299 "num_base_bdevs_operational": 2, 00:06:38.299 "base_bdevs_list": [ 00:06:38.299 { 00:06:38.299 "name": "BaseBdev1", 00:06:38.299 "uuid": "086d2893-b861-426b-8857-45ea90b08e56", 00:06:38.299 "is_configured": true, 00:06:38.299 "data_offset": 2048, 00:06:38.299 "data_size": 63488 00:06:38.299 }, 00:06:38.299 { 00:06:38.299 "name": "BaseBdev2", 00:06:38.299 "uuid": "78f6accb-1bc3-4b15-bdae-d17b8c6da5b8", 00:06:38.299 "is_configured": true, 00:06:38.299 "data_offset": 2048, 00:06:38.299 "data_size": 63488 00:06:38.299 } 00:06:38.299 ] 00:06:38.299 }' 00:06:38.299 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:38.299 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.867 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:38.867 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:38.867 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:38.867 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 [2024-11-27 21:39:01.749148] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:38.868 "name": "Existed_Raid", 00:06:38.868 "aliases": [ 00:06:38.868 "96a52e42-5e03-4e2c-82a7-c17e9ae411c3" 00:06:38.868 ], 00:06:38.868 "product_name": "Raid Volume", 00:06:38.868 "block_size": 512, 00:06:38.868 "num_blocks": 126976, 00:06:38.868 "uuid": "96a52e42-5e03-4e2c-82a7-c17e9ae411c3", 00:06:38.868 "assigned_rate_limits": { 00:06:38.868 "rw_ios_per_sec": 0, 00:06:38.868 "rw_mbytes_per_sec": 0, 00:06:38.868 "r_mbytes_per_sec": 0, 00:06:38.868 "w_mbytes_per_sec": 0 00:06:38.868 }, 00:06:38.868 "claimed": false, 00:06:38.868 "zoned": false, 00:06:38.868 "supported_io_types": { 00:06:38.868 "read": true, 00:06:38.868 "write": true, 00:06:38.868 "unmap": true, 00:06:38.868 "flush": true, 00:06:38.868 "reset": true, 00:06:38.868 "nvme_admin": false, 00:06:38.868 "nvme_io": false, 00:06:38.868 "nvme_io_md": false, 00:06:38.868 "write_zeroes": true, 00:06:38.868 "zcopy": false, 00:06:38.868 "get_zone_info": false, 00:06:38.868 "zone_management": false, 00:06:38.868 "zone_append": false, 00:06:38.868 "compare": false, 00:06:38.868 "compare_and_write": false, 00:06:38.868 "abort": false, 00:06:38.868 "seek_hole": false, 00:06:38.868 "seek_data": false, 00:06:38.868 "copy": false, 00:06:38.868 "nvme_iov_md": false 00:06:38.868 }, 00:06:38.868 "memory_domains": [ 00:06:38.868 { 00:06:38.868 
"dma_device_id": "system", 00:06:38.868 "dma_device_type": 1 00:06:38.868 }, 00:06:38.868 { 00:06:38.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.868 "dma_device_type": 2 00:06:38.868 }, 00:06:38.868 { 00:06:38.868 "dma_device_id": "system", 00:06:38.868 "dma_device_type": 1 00:06:38.868 }, 00:06:38.868 { 00:06:38.868 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.868 "dma_device_type": 2 00:06:38.868 } 00:06:38.868 ], 00:06:38.868 "driver_specific": { 00:06:38.868 "raid": { 00:06:38.868 "uuid": "96a52e42-5e03-4e2c-82a7-c17e9ae411c3", 00:06:38.868 "strip_size_kb": 64, 00:06:38.868 "state": "online", 00:06:38.868 "raid_level": "raid0", 00:06:38.868 "superblock": true, 00:06:38.868 "num_base_bdevs": 2, 00:06:38.868 "num_base_bdevs_discovered": 2, 00:06:38.868 "num_base_bdevs_operational": 2, 00:06:38.868 "base_bdevs_list": [ 00:06:38.868 { 00:06:38.868 "name": "BaseBdev1", 00:06:38.868 "uuid": "086d2893-b861-426b-8857-45ea90b08e56", 00:06:38.868 "is_configured": true, 00:06:38.868 "data_offset": 2048, 00:06:38.868 "data_size": 63488 00:06:38.868 }, 00:06:38.868 { 00:06:38.868 "name": "BaseBdev2", 00:06:38.868 "uuid": "78f6accb-1bc3-4b15-bdae-d17b8c6da5b8", 00:06:38.868 "is_configured": true, 00:06:38.868 "data_offset": 2048, 00:06:38.868 "data_size": 63488 00:06:38.868 } 00:06:38.868 ] 00:06:38.868 } 00:06:38.868 } 00:06:38.868 }' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:38.868 BaseBdev2' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:38.868 21:39:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.868 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:38.868 [2024-11-27 21:39:01.976505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:38.868 [2024-11-27 21:39:01.976590] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.868 [2024-11-27 21:39:01.976726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.127 21:39:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.127 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.127 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:39.127 "name": "Existed_Raid", 00:06:39.127 "uuid": "96a52e42-5e03-4e2c-82a7-c17e9ae411c3", 00:06:39.127 "strip_size_kb": 64, 00:06:39.127 "state": "offline", 00:06:39.127 "raid_level": "raid0", 00:06:39.127 "superblock": true, 00:06:39.127 "num_base_bdevs": 2, 00:06:39.127 "num_base_bdevs_discovered": 1, 00:06:39.127 "num_base_bdevs_operational": 1, 00:06:39.127 "base_bdevs_list": [ 00:06:39.127 { 00:06:39.127 "name": null, 00:06:39.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:39.127 "is_configured": false, 00:06:39.127 "data_offset": 0, 00:06:39.127 "data_size": 63488 00:06:39.127 }, 00:06:39.127 { 00:06:39.127 "name": "BaseBdev2", 00:06:39.127 "uuid": "78f6accb-1bc3-4b15-bdae-d17b8c6da5b8", 00:06:39.127 "is_configured": true, 00:06:39.127 "data_offset": 2048, 00:06:39.127 "data_size": 63488 00:06:39.127 } 00:06:39.127 ] 
00:06:39.127 }' 00:06:39.127 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:39.127 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.387 [2024-11-27 21:39:02.443240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:39.387 [2024-11-27 21:39:02.443292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.387 21:39:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72105 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72105 ']' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72105 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.387 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72105 00:06:39.647 killing process with pid 72105 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72105' 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72105 00:06:39.647 [2024-11-27 21:39:02.539836] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72105 00:06:39.647 [2024-11-27 21:39:02.540899] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:39.647 00:06:39.647 real 0m3.705s 00:06:39.647 user 0m5.824s 00:06:39.647 sys 0m0.748s 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.647 21:39:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:39.647 ************************************ 00:06:39.647 END TEST raid_state_function_test_sb 00:06:39.647 ************************************ 00:06:39.907 21:39:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:06:39.907 21:39:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:39.907 21:39:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.907 21:39:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 ************************************ 00:06:39.907 START TEST raid_superblock_test 00:06:39.907 ************************************ 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72340 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72340 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72340 ']' 00:06:39.907 
21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.907 21:39:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:39.907 [2024-11-27 21:39:02.901504] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:39.907 [2024-11-27 21:39:02.901643] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72340 ] 00:06:40.168 [2024-11-27 21:39:03.056908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.168 [2024-11-27 21:39:03.082057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.168 [2024-11-27 21:39:03.123562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.168 [2024-11-27 21:39:03.123600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 malloc1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 [2024-11-27 21:39:03.742554] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:40.740 [2024-11-27 21:39:03.742666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.740 [2024-11-27 21:39:03.742736] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:40.740 [2024-11-27 21:39:03.742791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:40.740 [2024-11-27 21:39:03.744925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.740 [2024-11-27 21:39:03.745013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:40.740 pt1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 malloc2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 [2024-11-27 21:39:03.770995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:40.740 [2024-11-27 21:39:03.771093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:40.740 [2024-11-27 21:39:03.771145] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:40.740 [2024-11-27 21:39:03.771194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:40.740 [2024-11-27 21:39:03.773322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:40.740 [2024-11-27 21:39:03.773399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:40.740 pt2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 [2024-11-27 21:39:03.783009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:40.740 [2024-11-27 21:39:03.784889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:40.740 [2024-11-27 21:39:03.785085] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:40.740 [2024-11-27 21:39:03.785138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:06:40.740 [2024-11-27 21:39:03.785473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:40.740 [2024-11-27 21:39:03.785664] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:40.740 [2024-11-27 21:39:03.785711] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:40.740 [2024-11-27 21:39:03.785944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:40.740 21:39:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.740 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:40.741 "name": "raid_bdev1", 00:06:40.741 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:40.741 "strip_size_kb": 64, 00:06:40.741 "state": "online", 00:06:40.741 "raid_level": "raid0", 00:06:40.741 "superblock": true, 00:06:40.741 "num_base_bdevs": 2, 00:06:40.741 "num_base_bdevs_discovered": 2, 00:06:40.741 "num_base_bdevs_operational": 2, 00:06:40.741 "base_bdevs_list": [ 00:06:40.741 { 00:06:40.741 "name": "pt1", 00:06:40.741 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:40.741 "is_configured": true, 00:06:40.741 "data_offset": 2048, 00:06:40.741 "data_size": 63488 00:06:40.741 }, 00:06:40.741 { 00:06:40.741 "name": "pt2", 00:06:40.741 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:40.741 "is_configured": true, 00:06:40.741 "data_offset": 2048, 00:06:40.741 "data_size": 63488 00:06:40.741 } 00:06:40.741 ] 00:06:40.741 }' 00:06:40.741 21:39:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:40.741 21:39:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:41.312 
21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:41.312 [2024-11-27 21:39:04.222578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:41.312 "name": "raid_bdev1", 00:06:41.312 "aliases": [ 00:06:41.312 "d82da3cd-5822-4219-b81f-35bba31ad268" 00:06:41.312 ], 00:06:41.312 "product_name": "Raid Volume", 00:06:41.312 "block_size": 512, 00:06:41.312 "num_blocks": 126976, 00:06:41.312 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:41.312 "assigned_rate_limits": { 00:06:41.312 "rw_ios_per_sec": 0, 00:06:41.312 "rw_mbytes_per_sec": 0, 00:06:41.312 "r_mbytes_per_sec": 0, 00:06:41.312 "w_mbytes_per_sec": 0 00:06:41.312 }, 00:06:41.312 "claimed": false, 00:06:41.312 "zoned": false, 00:06:41.312 "supported_io_types": { 00:06:41.312 "read": true, 00:06:41.312 "write": true, 00:06:41.312 "unmap": true, 00:06:41.312 "flush": true, 00:06:41.312 "reset": true, 00:06:41.312 "nvme_admin": false, 00:06:41.312 "nvme_io": false, 00:06:41.312 "nvme_io_md": false, 00:06:41.312 "write_zeroes": true, 00:06:41.312 "zcopy": false, 00:06:41.312 "get_zone_info": false, 00:06:41.312 "zone_management": false, 00:06:41.312 "zone_append": false, 00:06:41.312 "compare": false, 00:06:41.312 "compare_and_write": false, 00:06:41.312 "abort": false, 00:06:41.312 "seek_hole": false, 00:06:41.312 
"seek_data": false, 00:06:41.312 "copy": false, 00:06:41.312 "nvme_iov_md": false 00:06:41.312 }, 00:06:41.312 "memory_domains": [ 00:06:41.312 { 00:06:41.312 "dma_device_id": "system", 00:06:41.312 "dma_device_type": 1 00:06:41.312 }, 00:06:41.312 { 00:06:41.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.312 "dma_device_type": 2 00:06:41.312 }, 00:06:41.312 { 00:06:41.312 "dma_device_id": "system", 00:06:41.312 "dma_device_type": 1 00:06:41.312 }, 00:06:41.312 { 00:06:41.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:41.312 "dma_device_type": 2 00:06:41.312 } 00:06:41.312 ], 00:06:41.312 "driver_specific": { 00:06:41.312 "raid": { 00:06:41.312 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:41.312 "strip_size_kb": 64, 00:06:41.312 "state": "online", 00:06:41.312 "raid_level": "raid0", 00:06:41.312 "superblock": true, 00:06:41.312 "num_base_bdevs": 2, 00:06:41.312 "num_base_bdevs_discovered": 2, 00:06:41.312 "num_base_bdevs_operational": 2, 00:06:41.312 "base_bdevs_list": [ 00:06:41.312 { 00:06:41.312 "name": "pt1", 00:06:41.312 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:41.312 "is_configured": true, 00:06:41.312 "data_offset": 2048, 00:06:41.312 "data_size": 63488 00:06:41.312 }, 00:06:41.312 { 00:06:41.312 "name": "pt2", 00:06:41.312 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.312 "is_configured": true, 00:06:41.312 "data_offset": 2048, 00:06:41.312 "data_size": 63488 00:06:41.312 } 00:06:41.312 ] 00:06:41.312 } 00:06:41.312 } 00:06:41.312 }' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:41.312 pt2' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.312 21:39:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:41.312 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.573 [2024-11-27 21:39:04.438139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d82da3cd-5822-4219-b81f-35bba31ad268 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d82da3cd-5822-4219-b81f-35bba31ad268 ']' 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.573 [2024-11-27 21:39:04.477886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:41.573 [2024-11-27 21:39:04.477950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:41.573 [2024-11-27 21:39:04.478062] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:41.573 [2024-11-27 21:39:04.478170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:41.573 [2024-11-27 21:39:04.478222] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.573 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.574 [2024-11-27 21:39:04.609676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:41.574 [2024-11-27 21:39:04.611531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:41.574 [2024-11-27 21:39:04.611671] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:41.574 [2024-11-27 21:39:04.611804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:41.574 [2024-11-27 21:39:04.611881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:41.574 [2024-11-27 21:39:04.611933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:41.574 request: 00:06:41.574 { 00:06:41.574 "name": "raid_bdev1", 00:06:41.574 "raid_level": "raid0", 00:06:41.574 "base_bdevs": [ 00:06:41.574 "malloc1", 00:06:41.574 "malloc2" 00:06:41.574 ], 00:06:41.574 "strip_size_kb": 64, 00:06:41.574 "superblock": false, 00:06:41.574 "method": "bdev_raid_create", 00:06:41.574 "req_id": 1 00:06:41.574 } 00:06:41.574 Got JSON-RPC error response 00:06:41.574 response: 00:06:41.574 { 00:06:41.574 "code": -17, 00:06:41.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:41.574 } 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.574 
21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.574 [2024-11-27 21:39:04.677537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:41.574 [2024-11-27 21:39:04.677629] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:41.574 [2024-11-27 21:39:04.677682] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:41.574 [2024-11-27 21:39:04.677729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:41.574 [2024-11-27 21:39:04.679806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:41.574 [2024-11-27 21:39:04.679871] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:41.574 [2024-11-27 21:39:04.679966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:41.574 [2024-11-27 21:39:04.680042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:41.574 pt1 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.574 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.834 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.834 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:41.834 "name": "raid_bdev1", 00:06:41.834 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:41.834 "strip_size_kb": 64, 00:06:41.834 "state": "configuring", 00:06:41.834 "raid_level": "raid0", 00:06:41.834 "superblock": true, 00:06:41.834 "num_base_bdevs": 2, 00:06:41.834 "num_base_bdevs_discovered": 1, 00:06:41.834 "num_base_bdevs_operational": 2, 00:06:41.834 "base_bdevs_list": [ 00:06:41.834 { 00:06:41.834 "name": "pt1", 00:06:41.834 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:41.834 "is_configured": true, 00:06:41.834 "data_offset": 2048, 00:06:41.834 "data_size": 63488 00:06:41.834 }, 00:06:41.834 { 00:06:41.834 "name": null, 00:06:41.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:41.834 "is_configured": false, 00:06:41.834 "data_offset": 2048, 00:06:41.834 "data_size": 63488 00:06:41.834 } 00:06:41.834 ] 00:06:41.834 }' 00:06:41.834 21:39:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:41.834 21:39:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.094 [2024-11-27 21:39:05.140806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:42.094 [2024-11-27 21:39:05.140922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.094 [2024-11-27 21:39:05.141004] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:42.094 [2024-11-27 21:39:05.141043] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.094 [2024-11-27 21:39:05.141497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.094 [2024-11-27 21:39:05.141570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:06:42.094 [2024-11-27 21:39:05.141705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:42.094 [2024-11-27 21:39:05.141771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:42.094 [2024-11-27 21:39:05.141929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:42.094 [2024-11-27 21:39:05.141971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:42.094 [2024-11-27 21:39:05.142266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:42.094 [2024-11-27 21:39:05.142424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:42.094 [2024-11-27 21:39:05.142474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:42.094 [2024-11-27 21:39:05.142652] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.094 pt2 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:42.094 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:42.095 "name": "raid_bdev1", 00:06:42.095 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:42.095 "strip_size_kb": 64, 00:06:42.095 "state": "online", 00:06:42.095 "raid_level": "raid0", 00:06:42.095 "superblock": true, 00:06:42.095 "num_base_bdevs": 2, 00:06:42.095 "num_base_bdevs_discovered": 2, 00:06:42.095 "num_base_bdevs_operational": 2, 00:06:42.095 "base_bdevs_list": [ 00:06:42.095 { 00:06:42.095 "name": "pt1", 00:06:42.095 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:42.095 "is_configured": true, 00:06:42.095 "data_offset": 2048, 00:06:42.095 "data_size": 63488 00:06:42.095 }, 00:06:42.095 { 00:06:42.095 "name": "pt2", 00:06:42.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:42.095 "is_configured": true, 00:06:42.095 "data_offset": 2048, 00:06:42.095 "data_size": 63488 00:06:42.095 } 00:06:42.095 ] 00:06:42.095 }' 00:06:42.095 21:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:42.095 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.664 [2024-11-27 21:39:05.536394] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.664 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:42.664 "name": "raid_bdev1", 00:06:42.664 "aliases": [ 00:06:42.664 "d82da3cd-5822-4219-b81f-35bba31ad268" 00:06:42.664 ], 00:06:42.664 "product_name": "Raid Volume", 00:06:42.664 "block_size": 512, 00:06:42.664 "num_blocks": 126976, 00:06:42.664 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:42.664 "assigned_rate_limits": { 00:06:42.664 "rw_ios_per_sec": 0, 00:06:42.664 "rw_mbytes_per_sec": 0, 00:06:42.664 
"r_mbytes_per_sec": 0, 00:06:42.664 "w_mbytes_per_sec": 0 00:06:42.664 }, 00:06:42.664 "claimed": false, 00:06:42.664 "zoned": false, 00:06:42.664 "supported_io_types": { 00:06:42.664 "read": true, 00:06:42.664 "write": true, 00:06:42.664 "unmap": true, 00:06:42.664 "flush": true, 00:06:42.664 "reset": true, 00:06:42.664 "nvme_admin": false, 00:06:42.664 "nvme_io": false, 00:06:42.664 "nvme_io_md": false, 00:06:42.664 "write_zeroes": true, 00:06:42.665 "zcopy": false, 00:06:42.665 "get_zone_info": false, 00:06:42.665 "zone_management": false, 00:06:42.665 "zone_append": false, 00:06:42.665 "compare": false, 00:06:42.665 "compare_and_write": false, 00:06:42.665 "abort": false, 00:06:42.665 "seek_hole": false, 00:06:42.665 "seek_data": false, 00:06:42.665 "copy": false, 00:06:42.665 "nvme_iov_md": false 00:06:42.665 }, 00:06:42.665 "memory_domains": [ 00:06:42.665 { 00:06:42.665 "dma_device_id": "system", 00:06:42.665 "dma_device_type": 1 00:06:42.665 }, 00:06:42.665 { 00:06:42.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.665 "dma_device_type": 2 00:06:42.665 }, 00:06:42.665 { 00:06:42.665 "dma_device_id": "system", 00:06:42.665 "dma_device_type": 1 00:06:42.665 }, 00:06:42.665 { 00:06:42.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:42.665 "dma_device_type": 2 00:06:42.665 } 00:06:42.665 ], 00:06:42.665 "driver_specific": { 00:06:42.665 "raid": { 00:06:42.665 "uuid": "d82da3cd-5822-4219-b81f-35bba31ad268", 00:06:42.665 "strip_size_kb": 64, 00:06:42.665 "state": "online", 00:06:42.665 "raid_level": "raid0", 00:06:42.665 "superblock": true, 00:06:42.665 "num_base_bdevs": 2, 00:06:42.665 "num_base_bdevs_discovered": 2, 00:06:42.665 "num_base_bdevs_operational": 2, 00:06:42.665 "base_bdevs_list": [ 00:06:42.665 { 00:06:42.665 "name": "pt1", 00:06:42.665 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:42.665 "is_configured": true, 00:06:42.665 "data_offset": 2048, 00:06:42.665 "data_size": 63488 00:06:42.665 }, 00:06:42.665 { 00:06:42.665 "name": 
"pt2", 00:06:42.665 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:42.665 "is_configured": true, 00:06:42.665 "data_offset": 2048, 00:06:42.665 "data_size": 63488 00:06:42.665 } 00:06:42.665 ] 00:06:42.665 } 00:06:42.665 } 00:06:42.665 }' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:42.665 pt2' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:42.665 21:39:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:06:42.665 [2024-11-27 21:39:05.740000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d82da3cd-5822-4219-b81f-35bba31ad268 '!=' d82da3cd-5822-4219-b81f-35bba31ad268 ']' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72340 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72340 ']' 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 72340 00:06:42.665 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72340 00:06:42.926 killing process with pid 72340 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72340' 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72340 00:06:42.926 [2024-11-27 21:39:05.820972] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:42.926 [2024-11-27 21:39:05.821046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:42.926 [2024-11-27 21:39:05.821093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:42.926 [2024-11-27 21:39:05.821102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:42.926 21:39:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72340 00:06:42.926 [2024-11-27 21:39:05.843279] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:43.186 21:39:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:06:43.186 00:06:43.186 real 0m3.234s 00:06:43.186 user 0m5.020s 00:06:43.186 sys 0m0.659s 00:06:43.186 ************************************ 00:06:43.186 END TEST raid_superblock_test 00:06:43.186 ************************************ 00:06:43.186 21:39:06 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.186 21:39:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.186 21:39:06 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:06:43.186 21:39:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:43.186 21:39:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.186 21:39:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:43.186 ************************************ 00:06:43.186 START TEST raid_read_error_test 00:06:43.186 ************************************ 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.48WKAVjVS8 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72541 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72541 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72541 ']' 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.186 21:39:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.186 [2024-11-27 21:39:06.217825] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:43.186 [2024-11-27 21:39:06.218029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72541 ] 00:06:43.446 [2024-11-27 21:39:06.373668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.446 [2024-11-27 21:39:06.398546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.446 [2024-11-27 21:39:06.440124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.446 [2024-11-27 21:39:06.440260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 BaseBdev1_malloc 
00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 true 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 [2024-11-27 21:39:07.082927] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:44.019 [2024-11-27 21:39:07.083048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.019 [2024-11-27 21:39:07.083125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:44.019 [2024-11-27 21:39:07.083139] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.019 [2024-11-27 21:39:07.085279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.019 [2024-11-27 21:39:07.085314] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:44.019 BaseBdev1 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 BaseBdev2_malloc 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 true 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 [2024-11-27 21:39:07.123469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:44.019 [2024-11-27 21:39:07.123561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:44.019 [2024-11-27 21:39:07.123603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:44.019 [2024-11-27 21:39:07.123660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:44.019 [2024-11-27 21:39:07.125833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:44.019 [2024-11-27 21:39:07.125909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:44.019 BaseBdev2 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.019 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.019 [2024-11-27 21:39:07.135508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:44.019 [2024-11-27 21:39:07.137509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:44.019 [2024-11-27 21:39:07.137823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:44.019 [2024-11-27 21:39:07.137875] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:44.019 [2024-11-27 21:39:07.138222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:06:44.019 [2024-11-27 21:39:07.138429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:44.019 [2024-11-27 21:39:07.138482] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:44.019 [2024-11-27 21:39:07.138747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:44.278 "name": "raid_bdev1", 00:06:44.278 "uuid": "f28e03b8-f084-4ea5-8273-81e316a3f95a", 00:06:44.278 "strip_size_kb": 64, 00:06:44.278 "state": "online", 00:06:44.278 "raid_level": "raid0", 00:06:44.278 "superblock": true, 00:06:44.278 "num_base_bdevs": 2, 00:06:44.278 "num_base_bdevs_discovered": 2, 00:06:44.278 "num_base_bdevs_operational": 2, 00:06:44.278 "base_bdevs_list": [ 00:06:44.278 { 00:06:44.278 "name": "BaseBdev1", 00:06:44.278 "uuid": "2512e6f9-f23e-5a62-bc05-040266809751", 00:06:44.278 "is_configured": true, 00:06:44.278 "data_offset": 2048, 00:06:44.278 "data_size": 63488 00:06:44.278 }, 00:06:44.278 { 00:06:44.278 "name": "BaseBdev2", 00:06:44.278 "uuid": 
"ef03125c-eae5-58ee-8f01-7c6b11a769d0", 00:06:44.278 "is_configured": true, 00:06:44.278 "data_offset": 2048, 00:06:44.278 "data_size": 63488 00:06:44.278 } 00:06:44.278 ] 00:06:44.278 }' 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:44.278 21:39:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:44.538 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:44.538 21:39:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:44.538 [2024-11-27 21:39:07.655007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.478 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.751 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.751 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:45.751 "name": "raid_bdev1", 00:06:45.751 "uuid": "f28e03b8-f084-4ea5-8273-81e316a3f95a", 00:06:45.751 "strip_size_kb": 64, 00:06:45.751 "state": "online", 00:06:45.751 "raid_level": "raid0", 00:06:45.751 "superblock": true, 00:06:45.751 "num_base_bdevs": 2, 00:06:45.751 "num_base_bdevs_discovered": 2, 00:06:45.751 "num_base_bdevs_operational": 2, 00:06:45.751 "base_bdevs_list": [ 00:06:45.751 { 00:06:45.751 "name": "BaseBdev1", 00:06:45.751 "uuid": "2512e6f9-f23e-5a62-bc05-040266809751", 00:06:45.751 "is_configured": true, 00:06:45.751 "data_offset": 2048, 00:06:45.751 "data_size": 63488 00:06:45.751 }, 00:06:45.751 { 00:06:45.751 "name": "BaseBdev2", 00:06:45.751 "uuid": 
"ef03125c-eae5-58ee-8f01-7c6b11a769d0", 00:06:45.751 "is_configured": true, 00:06:45.751 "data_offset": 2048, 00:06:45.751 "data_size": 63488 00:06:45.751 } 00:06:45.751 ] 00:06:45.751 }' 00:06:45.751 21:39:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:45.751 21:39:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.024 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:46.024 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.024 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.025 [2024-11-27 21:39:09.022982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:46.025 [2024-11-27 21:39:09.023059] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:46.025 [2024-11-27 21:39:09.025693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.025 [2024-11-27 21:39:09.025807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.025 [2024-11-27 21:39:09.025854] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.025 [2024-11-27 21:39:09.025863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:46.025 { 00:06:46.025 "results": [ 00:06:46.025 { 00:06:46.025 "job": "raid_bdev1", 00:06:46.025 "core_mask": "0x1", 00:06:46.025 "workload": "randrw", 00:06:46.025 "percentage": 50, 00:06:46.025 "status": "finished", 00:06:46.025 "queue_depth": 1, 00:06:46.025 "io_size": 131072, 00:06:46.025 "runtime": 1.368955, 00:06:46.025 "iops": 16769.725812754983, 00:06:46.025 "mibps": 2096.215726594373, 00:06:46.025 "io_failed": 1, 00:06:46.025 "io_timeout": 0, 00:06:46.025 "avg_latency_us": 
82.20945375473953, 00:06:46.025 "min_latency_us": 25.2646288209607, 00:06:46.025 "max_latency_us": 1438.071615720524 00:06:46.025 } 00:06:46.025 ], 00:06:46.025 "core_count": 1 00:06:46.025 } 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72541 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72541 ']' 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72541 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72541 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.025 killing process with pid 72541 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72541' 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72541 00:06:46.025 [2024-11-27 21:39:09.070050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.025 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72541 00:06:46.025 [2024-11-27 21:39:09.086348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.48WKAVjVS8 00:06:46.284 
21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:46.284 ************************************ 00:06:46.284 END TEST raid_read_error_test 00:06:46.284 ************************************ 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:06:46.284 00:06:46.284 real 0m3.180s 00:06:46.284 user 0m4.041s 00:06:46.284 sys 0m0.508s 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.284 21:39:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.284 21:39:09 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:06:46.284 21:39:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:46.284 21:39:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.284 21:39:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.284 ************************************ 00:06:46.284 START TEST raid_write_error_test 00:06:46.284 ************************************ 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
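The pass/fail check above derives `fail_per_s` from the bdevperf results JSON printed earlier in this run: `io_failed` divided by `runtime` (0.73 here), with the reported `mibps` following from IOPS times the 128 KiB `io_size`. A minimal sketch of that arithmetic, using values copied from this log's `raid_read_error_test` results block (the `result` dict is an illustrative excerpt, not live RPC output):

```python
# Reproduce the derived metrics from the bdevperf results JSON above.
# Values copied verbatim from the raid_read_error_test run in this log.
result = {
    "io_failed": 1,
    "runtime": 1.368955,           # seconds
    "iops": 16769.725812754983,
    "io_size": 131072,             # 128 KiB per I/O (-o 128k)
}

# fail_per_s: failed I/Os per second of runtime -- the field the
# grep/awk pipeline extracts and compares against 0.00.
fail_per_s = result["io_failed"] / result["runtime"]

# mibps: IOPS times I/O size, expressed in MiB/s.
mibps = result["iops"] * result["io_size"] / (1024 * 1024)

print(f"{fail_per_s:.2f}")  # 0.73, matching the log's fail_per_s
print(f"{mibps:.3f}")       # 2096.216, matching the log's "mibps"
```

The test then asserts `fail_per_s != 0.00`, i.e. that the injected read error on `EE_BaseBdev1_malloc` was actually observed by bdevperf.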
00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:06:46.284 21:39:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9IuKb1ieKQ 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72670 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72670 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72670 ']' 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.284 21:39:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 [2024-11-27 21:39:09.464320] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:46.544 [2024-11-27 21:39:09.464534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72670 ] 00:06:46.544 [2024-11-27 21:39:09.619866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.544 [2024-11-27 21:39:09.644258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.803 [2024-11-27 21:39:09.686526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.803 [2024-11-27 21:39:09.686667] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 BaseBdev1_malloc 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 true 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 [2024-11-27 21:39:10.326360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:06:47.372 [2024-11-27 21:39:10.326463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.372 [2024-11-27 21:39:10.326540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:06:47.372 [2024-11-27 21:39:10.326577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.372 [2024-11-27 21:39:10.328680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.372 [2024-11-27 21:39:10.328772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:06:47.372 BaseBdev1 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 BaseBdev2_malloc 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:06:47.372 21:39:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 true 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.372 [2024-11-27 21:39:10.366705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:06:47.372 [2024-11-27 21:39:10.366789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:47.372 [2024-11-27 21:39:10.366836] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:06:47.372 [2024-11-27 21:39:10.366855] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:47.372 [2024-11-27 21:39:10.368908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:47.372 [2024-11-27 21:39:10.368942] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:06:47.372 BaseBdev2 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.372 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.373 [2024-11-27 21:39:10.378715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:06:47.373 [2024-11-27 21:39:10.380536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:47.373 [2024-11-27 21:39:10.380772] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:47.373 [2024-11-27 21:39:10.380841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:47.373 [2024-11-27 21:39:10.381144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:06:47.373 [2024-11-27 21:39:10.381355] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:47.373 [2024-11-27 21:39:10.381405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:47.373 [2024-11-27 21:39:10.381613] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:47.373 "name": "raid_bdev1", 00:06:47.373 "uuid": "709aa174-4c54-424a-a80d-d5212c036067", 00:06:47.373 "strip_size_kb": 64, 00:06:47.373 "state": "online", 00:06:47.373 "raid_level": "raid0", 00:06:47.373 "superblock": true, 00:06:47.373 "num_base_bdevs": 2, 00:06:47.373 "num_base_bdevs_discovered": 2, 00:06:47.373 "num_base_bdevs_operational": 2, 00:06:47.373 "base_bdevs_list": [ 00:06:47.373 { 00:06:47.373 "name": "BaseBdev1", 00:06:47.373 "uuid": "6c20d42e-f448-5408-84a9-8b1d84a442bf", 00:06:47.373 "is_configured": true, 00:06:47.373 "data_offset": 2048, 00:06:47.373 "data_size": 63488 00:06:47.373 }, 00:06:47.373 { 00:06:47.373 "name": "BaseBdev2", 00:06:47.373 "uuid": "88d87b53-9564-50db-aa0b-36cd57a46c46", 00:06:47.373 "is_configured": true, 00:06:47.373 "data_offset": 2048, 00:06:47.373 "data_size": 63488 00:06:47.373 } 00:06:47.373 ] 00:06:47.373 }' 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:47.373 21:39:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:47.942 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:47.942 21:39:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:06:47.942 [2024-11-27 21:39:10.906254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:06:48.880 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
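The `verify_raid_bdev_state` helper seen here selects one entry from the `rpc_cmd bdev_raid_get_bdevs all` output with `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks its fields. The same selection can be sketched in Python against a list shaped like the JSON in this log (the sample entry is a trimmed copy of the logged output, not live RPC data):

```python
import json

# Sample shaped like `bdev_raid_get_bdevs all` output captured in this log,
# trimmed to the fields verify_raid_bdev_state actually inspects.
bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid0",
    "strip_size_kb": 64,
    "num_base_bdevs_discovered": 2,
    "num_base_bdevs_operational": 2
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in bdevs if b["name"] == "raid_bdev1")

# The comparisons the shell helper then performs on the selected object.
assert info["state"] == "online"
assert info["raid_level"] == "raid0"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_operational"] == 2
print(info["state"])  # online
```

In the log the same checks pass both before and after the injected write error, since a raid0 read/write error test expects the array to stay online with both base bdevs operational.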
00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:48.881 "name": "raid_bdev1", 00:06:48.881 "uuid": "709aa174-4c54-424a-a80d-d5212c036067", 00:06:48.881 "strip_size_kb": 64, 00:06:48.881 "state": "online", 00:06:48.881 "raid_level": "raid0", 00:06:48.881 "superblock": true, 00:06:48.881 "num_base_bdevs": 2, 00:06:48.881 "num_base_bdevs_discovered": 2, 00:06:48.881 "num_base_bdevs_operational": 2, 00:06:48.881 "base_bdevs_list": [ 00:06:48.881 { 00:06:48.881 "name": "BaseBdev1", 00:06:48.881 "uuid": "6c20d42e-f448-5408-84a9-8b1d84a442bf", 00:06:48.881 "is_configured": true, 00:06:48.881 "data_offset": 2048, 00:06:48.881 "data_size": 63488 00:06:48.881 }, 00:06:48.881 { 00:06:48.881 "name": "BaseBdev2", 00:06:48.881 "uuid": "88d87b53-9564-50db-aa0b-36cd57a46c46", 00:06:48.881 "is_configured": true, 00:06:48.881 "data_offset": 2048, 00:06:48.881 "data_size": 63488 00:06:48.881 } 00:06:48.881 ] 00:06:48.881 }' 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:48.881 21:39:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.449 [2024-11-27 21:39:12.330454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:49.449 [2024-11-27 21:39:12.330524] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:49.449 [2024-11-27 21:39:12.333076] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.449 [2024-11-27 21:39:12.333171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:49.449 [2024-11-27 21:39:12.333250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.449 [2024-11-27 21:39:12.333317] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:06:49.449 { 00:06:49.449 "results": [ 00:06:49.449 { 00:06:49.449 "job": "raid_bdev1", 00:06:49.449 "core_mask": "0x1", 00:06:49.449 "workload": "randrw", 00:06:49.449 "percentage": 50, 00:06:49.449 "status": "finished", 00:06:49.449 "queue_depth": 1, 00:06:49.449 "io_size": 131072, 00:06:49.449 "runtime": 1.425349, 00:06:49.449 "iops": 16946.72673148822, 00:06:49.449 "mibps": 2118.3408414360274, 00:06:49.449 "io_failed": 1, 00:06:49.449 "io_timeout": 0, 00:06:49.449 "avg_latency_us": 81.34385880423535, 00:06:49.449 "min_latency_us": 25.2646288209607, 00:06:49.449 "max_latency_us": 1445.2262008733624 00:06:49.449 } 00:06:49.449 ], 00:06:49.449 "core_count": 1 00:06:49.449 } 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72670 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@954 -- # '[' -z 72670 ']' 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72670 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72670 00:06:49.449 killing process with pid 72670 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72670' 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72670 00:06:49.449 [2024-11-27 21:39:12.371308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.449 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72670 00:06:49.449 [2024-11-27 21:39:12.385873] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9IuKb1ieKQ 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:06:49.708 ************************************ 00:06:49.708 END TEST raid_write_error_test 00:06:49.708 ************************************ 
00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:49.708 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:49.709 21:39:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:06:49.709 00:06:49.709 real 0m3.226s 00:06:49.709 user 0m4.150s 00:06:49.709 sys 0m0.481s 00:06:49.709 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.709 21:39:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.709 21:39:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:49.709 21:39:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:06:49.709 21:39:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:49.709 21:39:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.709 21:39:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.709 ************************************ 00:06:49.709 START TEST raid_state_function_test 00:06:49.709 ************************************ 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72797 00:06:49.709 21:39:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72797' 00:06:49.709 Process raid pid: 72797 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72797 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72797 ']' 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.709 21:39:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.709 [2024-11-27 21:39:12.752928] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:06:49.709 [2024-11-27 21:39:12.753117] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.968 [2024-11-27 21:39:12.906794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.968 [2024-11-27 21:39:12.931364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.968 [2024-11-27 21:39:12.973177] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:49.968 [2024-11-27 21:39:12.973212] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 [2024-11-27 21:39:13.583573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:50.536 [2024-11-27 21:39:13.583701] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:50.536 [2024-11-27 21:39:13.583745] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:50.536 [2024-11-27 21:39:13.583793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.536 21:39:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:50.536 "name": "Existed_Raid", 00:06:50.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.536 "strip_size_kb": 64, 00:06:50.536 "state": "configuring", 00:06:50.536 
"raid_level": "concat", 00:06:50.536 "superblock": false, 00:06:50.536 "num_base_bdevs": 2, 00:06:50.536 "num_base_bdevs_discovered": 0, 00:06:50.536 "num_base_bdevs_operational": 2, 00:06:50.536 "base_bdevs_list": [ 00:06:50.536 { 00:06:50.536 "name": "BaseBdev1", 00:06:50.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.536 "is_configured": false, 00:06:50.536 "data_offset": 0, 00:06:50.536 "data_size": 0 00:06:50.536 }, 00:06:50.536 { 00:06:50.536 "name": "BaseBdev2", 00:06:50.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:50.536 "is_configured": false, 00:06:50.536 "data_offset": 0, 00:06:50.536 "data_size": 0 00:06:50.536 } 00:06:50.536 ] 00:06:50.536 }' 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:50.536 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 21:39:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.106 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 [2024-11-27 21:39:13.998784] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.106 [2024-11-27 21:39:13.998880] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:51.106 [2024-11-27 21:39:14.010770] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.106 [2024-11-27 21:39:14.010870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.106 [2024-11-27 21:39:14.010911] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.106 [2024-11-27 21:39:14.010982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 [2024-11-27 21:39:14.031557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.106 BaseBdev1 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 [ 00:06:51.106 { 00:06:51.106 "name": "BaseBdev1", 00:06:51.106 "aliases": [ 00:06:51.106 "9862b542-ae4e-4b11-af9e-da529336a77f" 00:06:51.106 ], 00:06:51.106 "product_name": "Malloc disk", 00:06:51.106 "block_size": 512, 00:06:51.106 "num_blocks": 65536, 00:06:51.106 "uuid": "9862b542-ae4e-4b11-af9e-da529336a77f", 00:06:51.106 "assigned_rate_limits": { 00:06:51.106 "rw_ios_per_sec": 0, 00:06:51.106 "rw_mbytes_per_sec": 0, 00:06:51.106 "r_mbytes_per_sec": 0, 00:06:51.106 "w_mbytes_per_sec": 0 00:06:51.106 }, 00:06:51.106 "claimed": true, 00:06:51.106 "claim_type": "exclusive_write", 00:06:51.106 "zoned": false, 00:06:51.106 "supported_io_types": { 00:06:51.106 "read": true, 00:06:51.106 "write": true, 00:06:51.106 "unmap": true, 00:06:51.106 "flush": true, 00:06:51.106 "reset": true, 00:06:51.106 "nvme_admin": false, 00:06:51.106 "nvme_io": false, 00:06:51.106 "nvme_io_md": false, 00:06:51.106 "write_zeroes": true, 00:06:51.106 "zcopy": true, 00:06:51.106 "get_zone_info": false, 00:06:51.106 "zone_management": false, 00:06:51.106 "zone_append": false, 00:06:51.106 "compare": false, 00:06:51.106 "compare_and_write": false, 00:06:51.106 "abort": true, 00:06:51.106 "seek_hole": false, 00:06:51.106 "seek_data": false, 00:06:51.106 "copy": true, 00:06:51.106 "nvme_iov_md": 
false 00:06:51.106 }, 00:06:51.106 "memory_domains": [ 00:06:51.106 { 00:06:51.106 "dma_device_id": "system", 00:06:51.106 "dma_device_type": 1 00:06:51.106 }, 00:06:51.106 { 00:06:51.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.106 "dma_device_type": 2 00:06:51.106 } 00:06:51.106 ], 00:06:51.106 "driver_specific": {} 00:06:51.106 } 00:06:51.106 ] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.106 21:39:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.106 "name": "Existed_Raid", 00:06:51.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.106 "strip_size_kb": 64, 00:06:51.106 "state": "configuring", 00:06:51.106 "raid_level": "concat", 00:06:51.106 "superblock": false, 00:06:51.106 "num_base_bdevs": 2, 00:06:51.106 "num_base_bdevs_discovered": 1, 00:06:51.106 "num_base_bdevs_operational": 2, 00:06:51.106 "base_bdevs_list": [ 00:06:51.106 { 00:06:51.106 "name": "BaseBdev1", 00:06:51.106 "uuid": "9862b542-ae4e-4b11-af9e-da529336a77f", 00:06:51.106 "is_configured": true, 00:06:51.106 "data_offset": 0, 00:06:51.106 "data_size": 65536 00:06:51.106 }, 00:06:51.106 { 00:06:51.106 "name": "BaseBdev2", 00:06:51.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.106 "is_configured": false, 00:06:51.106 "data_offset": 0, 00:06:51.106 "data_size": 0 00:06:51.106 } 00:06:51.106 ] 00:06:51.106 }' 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.106 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 [2024-11-27 21:39:14.474865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.366 [2024-11-27 21:39:14.474967] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.366 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.366 [2024-11-27 21:39:14.486874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.625 [2024-11-27 21:39:14.488918] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.625 [2024-11-27 21:39:14.489016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.625 "name": "Existed_Raid", 00:06:51.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.625 "strip_size_kb": 64, 00:06:51.625 "state": "configuring", 00:06:51.625 "raid_level": "concat", 00:06:51.625 "superblock": false, 00:06:51.625 "num_base_bdevs": 2, 00:06:51.625 "num_base_bdevs_discovered": 1, 00:06:51.625 "num_base_bdevs_operational": 2, 00:06:51.625 "base_bdevs_list": [ 00:06:51.625 { 00:06:51.625 "name": "BaseBdev1", 00:06:51.625 "uuid": "9862b542-ae4e-4b11-af9e-da529336a77f", 00:06:51.625 "is_configured": true, 00:06:51.625 "data_offset": 0, 00:06:51.625 "data_size": 65536 00:06:51.625 }, 00:06:51.625 { 00:06:51.625 "name": "BaseBdev2", 00:06:51.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.625 "is_configured": false, 00:06:51.625 "data_offset": 0, 00:06:51.625 "data_size": 0 
00:06:51.625 } 00:06:51.625 ] 00:06:51.625 }' 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.625 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 [2024-11-27 21:39:14.936939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:51.884 [2024-11-27 21:39:14.937050] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:51.884 [2024-11-27 21:39:14.937109] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:51.884 [2024-11-27 21:39:14.937452] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:51.884 [2024-11-27 21:39:14.937656] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:51.884 [2024-11-27 21:39:14.937710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:51.884 [2024-11-27 21:39:14.938002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.884 BaseBdev2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.884 21:39:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 [ 00:06:51.884 { 00:06:51.884 "name": "BaseBdev2", 00:06:51.884 "aliases": [ 00:06:51.884 "e72e9168-a1a6-4e4e-9bac-3d0fb9b4245b" 00:06:51.884 ], 00:06:51.884 "product_name": "Malloc disk", 00:06:51.884 "block_size": 512, 00:06:51.884 "num_blocks": 65536, 00:06:51.884 "uuid": "e72e9168-a1a6-4e4e-9bac-3d0fb9b4245b", 00:06:51.884 "assigned_rate_limits": { 00:06:51.884 "rw_ios_per_sec": 0, 00:06:51.884 "rw_mbytes_per_sec": 0, 00:06:51.884 "r_mbytes_per_sec": 0, 00:06:51.884 "w_mbytes_per_sec": 0 00:06:51.884 }, 00:06:51.884 "claimed": true, 00:06:51.884 "claim_type": "exclusive_write", 00:06:51.884 "zoned": false, 00:06:51.884 "supported_io_types": { 00:06:51.884 "read": true, 00:06:51.884 "write": true, 00:06:51.884 "unmap": true, 00:06:51.884 "flush": true, 00:06:51.884 "reset": true, 00:06:51.884 "nvme_admin": false, 00:06:51.884 "nvme_io": false, 00:06:51.884 "nvme_io_md": 
false, 00:06:51.884 "write_zeroes": true, 00:06:51.884 "zcopy": true, 00:06:51.884 "get_zone_info": false, 00:06:51.884 "zone_management": false, 00:06:51.884 "zone_append": false, 00:06:51.884 "compare": false, 00:06:51.884 "compare_and_write": false, 00:06:51.884 "abort": true, 00:06:51.884 "seek_hole": false, 00:06:51.884 "seek_data": false, 00:06:51.884 "copy": true, 00:06:51.884 "nvme_iov_md": false 00:06:51.884 }, 00:06:51.884 "memory_domains": [ 00:06:51.884 { 00:06:51.884 "dma_device_id": "system", 00:06:51.884 "dma_device_type": 1 00:06:51.884 }, 00:06:51.884 { 00:06:51.884 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.884 "dma_device_type": 2 00:06:51.884 } 00:06:51.884 ], 00:06:51.884 "driver_specific": {} 00:06:51.884 } 00:06:51.884 ] 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.884 21:39:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.143 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.143 "name": "Existed_Raid", 00:06:52.143 "uuid": "d106f278-7760-491d-acf1-ae65139f856f", 00:06:52.143 "strip_size_kb": 64, 00:06:52.143 "state": "online", 00:06:52.143 "raid_level": "concat", 00:06:52.143 "superblock": false, 00:06:52.143 "num_base_bdevs": 2, 00:06:52.143 "num_base_bdevs_discovered": 2, 00:06:52.143 "num_base_bdevs_operational": 2, 00:06:52.144 "base_bdevs_list": [ 00:06:52.144 { 00:06:52.144 "name": "BaseBdev1", 00:06:52.144 "uuid": "9862b542-ae4e-4b11-af9e-da529336a77f", 00:06:52.144 "is_configured": true, 00:06:52.144 "data_offset": 0, 00:06:52.144 "data_size": 65536 00:06:52.144 }, 00:06:52.144 { 00:06:52.144 "name": "BaseBdev2", 00:06:52.144 "uuid": "e72e9168-a1a6-4e4e-9bac-3d0fb9b4245b", 00:06:52.144 "is_configured": true, 00:06:52.144 "data_offset": 0, 00:06:52.144 "data_size": 65536 00:06:52.144 } 00:06:52.144 ] 00:06:52.144 }' 00:06:52.144 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:52.144 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.402 [2024-11-27 21:39:15.444407] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.402 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:52.402 "name": "Existed_Raid", 00:06:52.402 "aliases": [ 00:06:52.402 "d106f278-7760-491d-acf1-ae65139f856f" 00:06:52.402 ], 00:06:52.402 "product_name": "Raid Volume", 00:06:52.402 "block_size": 512, 00:06:52.402 "num_blocks": 131072, 00:06:52.402 "uuid": "d106f278-7760-491d-acf1-ae65139f856f", 00:06:52.402 "assigned_rate_limits": { 00:06:52.402 "rw_ios_per_sec": 0, 00:06:52.402 "rw_mbytes_per_sec": 0, 00:06:52.402 "r_mbytes_per_sec": 
0, 00:06:52.402 "w_mbytes_per_sec": 0 00:06:52.402 }, 00:06:52.402 "claimed": false, 00:06:52.402 "zoned": false, 00:06:52.402 "supported_io_types": { 00:06:52.402 "read": true, 00:06:52.402 "write": true, 00:06:52.402 "unmap": true, 00:06:52.402 "flush": true, 00:06:52.402 "reset": true, 00:06:52.402 "nvme_admin": false, 00:06:52.402 "nvme_io": false, 00:06:52.402 "nvme_io_md": false, 00:06:52.402 "write_zeroes": true, 00:06:52.402 "zcopy": false, 00:06:52.402 "get_zone_info": false, 00:06:52.402 "zone_management": false, 00:06:52.402 "zone_append": false, 00:06:52.402 "compare": false, 00:06:52.402 "compare_and_write": false, 00:06:52.402 "abort": false, 00:06:52.402 "seek_hole": false, 00:06:52.402 "seek_data": false, 00:06:52.402 "copy": false, 00:06:52.402 "nvme_iov_md": false 00:06:52.402 }, 00:06:52.402 "memory_domains": [ 00:06:52.402 { 00:06:52.402 "dma_device_id": "system", 00:06:52.402 "dma_device_type": 1 00:06:52.402 }, 00:06:52.402 { 00:06:52.402 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.402 "dma_device_type": 2 00:06:52.402 }, 00:06:52.402 { 00:06:52.403 "dma_device_id": "system", 00:06:52.403 "dma_device_type": 1 00:06:52.403 }, 00:06:52.403 { 00:06:52.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.403 "dma_device_type": 2 00:06:52.403 } 00:06:52.403 ], 00:06:52.403 "driver_specific": { 00:06:52.403 "raid": { 00:06:52.403 "uuid": "d106f278-7760-491d-acf1-ae65139f856f", 00:06:52.403 "strip_size_kb": 64, 00:06:52.403 "state": "online", 00:06:52.403 "raid_level": "concat", 00:06:52.403 "superblock": false, 00:06:52.403 "num_base_bdevs": 2, 00:06:52.403 "num_base_bdevs_discovered": 2, 00:06:52.403 "num_base_bdevs_operational": 2, 00:06:52.403 "base_bdevs_list": [ 00:06:52.403 { 00:06:52.403 "name": "BaseBdev1", 00:06:52.403 "uuid": "9862b542-ae4e-4b11-af9e-da529336a77f", 00:06:52.403 "is_configured": true, 00:06:52.403 "data_offset": 0, 00:06:52.403 "data_size": 65536 00:06:52.403 }, 00:06:52.403 { 00:06:52.403 "name": "BaseBdev2", 
00:06:52.403 "uuid": "e72e9168-a1a6-4e4e-9bac-3d0fb9b4245b", 00:06:52.403 "is_configured": true, 00:06:52.403 "data_offset": 0, 00:06:52.403 "data_size": 65536 00:06:52.403 } 00:06:52.403 ] 00:06:52.403 } 00:06:52.403 } 00:06:52.403 }' 00:06:52.403 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:52.662 BaseBdev2' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.662 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.663 [2024-11-27 21:39:15.659812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:52.663 [2024-11-27 21:39:15.659878] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:52.663 [2024-11-27 21:39:15.659980] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.663 "name": "Existed_Raid", 00:06:52.663 "uuid": "d106f278-7760-491d-acf1-ae65139f856f", 00:06:52.663 "strip_size_kb": 64, 00:06:52.663 
"state": "offline", 00:06:52.663 "raid_level": "concat", 00:06:52.663 "superblock": false, 00:06:52.663 "num_base_bdevs": 2, 00:06:52.663 "num_base_bdevs_discovered": 1, 00:06:52.663 "num_base_bdevs_operational": 1, 00:06:52.663 "base_bdevs_list": [ 00:06:52.663 { 00:06:52.663 "name": null, 00:06:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.663 "is_configured": false, 00:06:52.663 "data_offset": 0, 00:06:52.663 "data_size": 65536 00:06:52.663 }, 00:06:52.663 { 00:06:52.663 "name": "BaseBdev2", 00:06:52.663 "uuid": "e72e9168-a1a6-4e4e-9bac-3d0fb9b4245b", 00:06:52.663 "is_configured": true, 00:06:52.663 "data_offset": 0, 00:06:52.663 "data_size": 65536 00:06:52.663 } 00:06:52.663 ] 00:06:52.663 }' 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.663 21:39:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.232 [2024-11-27 21:39:16.146254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:53.232 [2024-11-27 21:39:16.146361] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72797 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72797 ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 72797 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72797 00:06:53.232 killing process with pid 72797 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72797' 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72797 00:06:53.232 [2024-11-27 21:39:16.246464] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.232 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72797 00:06:53.232 [2024-11-27 21:39:16.247443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.492 ************************************ 00:06:53.492 END TEST raid_state_function_test 00:06:53.492 ************************************ 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:53.492 00:06:53.492 real 0m3.797s 00:06:53.492 user 0m6.001s 00:06:53.492 sys 0m0.735s 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.492 21:39:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:06:53.492 21:39:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:53.492 21:39:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.492 21:39:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.492 ************************************ 00:06:53.492 START TEST raid_state_function_test_sb 00:06:53.492 ************************************ 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73039 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73039' 00:06:53.492 Process raid pid: 73039 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73039 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73039 ']' 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.492 21:39:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:53.752 [2024-11-27 21:39:16.615733] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:53.752 [2024-11-27 21:39:16.615879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.752 [2024-11-27 21:39:16.748268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.752 [2024-11-27 21:39:16.772621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.752 [2024-11-27 21:39:16.814039] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.752 [2024-11-27 21:39:16.814072] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.322 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.322 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:54.322 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.322 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.322 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.580 [2024-11-27 21:39:17.448259] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:06:54.580 [2024-11-27 21:39:17.448309] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.580 [2024-11-27 21:39:17.448318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.580 [2024-11-27 21:39:17.448329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.580 "name": "Existed_Raid", 00:06:54.580 "uuid": "674f0269-fde1-47dd-87bb-ee7513157562", 00:06:54.580 "strip_size_kb": 64, 00:06:54.580 "state": "configuring", 00:06:54.580 "raid_level": "concat", 00:06:54.580 "superblock": true, 00:06:54.580 "num_base_bdevs": 2, 00:06:54.580 "num_base_bdevs_discovered": 0, 00:06:54.580 "num_base_bdevs_operational": 2, 00:06:54.580 "base_bdevs_list": [ 00:06:54.580 { 00:06:54.580 "name": "BaseBdev1", 00:06:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.580 "is_configured": false, 00:06:54.580 "data_offset": 0, 00:06:54.580 "data_size": 0 00:06:54.580 }, 00:06:54.580 { 00:06:54.580 "name": "BaseBdev2", 00:06:54.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.580 "is_configured": false, 00:06:54.580 "data_offset": 0, 00:06:54.580 "data_size": 0 00:06:54.580 } 00:06:54.580 ] 00:06:54.580 }' 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.580 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 [2024-11-27 21:39:17.827541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:06:54.840 [2024-11-27 21:39:17.827662] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 [2024-11-27 21:39:17.839521] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.840 [2024-11-27 21:39:17.839613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.840 [2024-11-27 21:39:17.839653] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.840 [2024-11-27 21:39:17.839722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 [2024-11-27 21:39:17.860198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:54.840 BaseBdev1 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 [ 00:06:54.840 { 00:06:54.840 "name": "BaseBdev1", 00:06:54.840 "aliases": [ 00:06:54.840 "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6" 00:06:54.840 ], 00:06:54.840 "product_name": "Malloc disk", 00:06:54.840 "block_size": 512, 00:06:54.840 "num_blocks": 65536, 00:06:54.840 "uuid": "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6", 00:06:54.840 "assigned_rate_limits": { 00:06:54.840 "rw_ios_per_sec": 0, 00:06:54.840 "rw_mbytes_per_sec": 0, 00:06:54.840 "r_mbytes_per_sec": 0, 00:06:54.840 "w_mbytes_per_sec": 0 00:06:54.840 }, 00:06:54.840 "claimed": true, 
00:06:54.840 "claim_type": "exclusive_write", 00:06:54.840 "zoned": false, 00:06:54.840 "supported_io_types": { 00:06:54.840 "read": true, 00:06:54.840 "write": true, 00:06:54.840 "unmap": true, 00:06:54.840 "flush": true, 00:06:54.840 "reset": true, 00:06:54.840 "nvme_admin": false, 00:06:54.840 "nvme_io": false, 00:06:54.840 "nvme_io_md": false, 00:06:54.840 "write_zeroes": true, 00:06:54.840 "zcopy": true, 00:06:54.840 "get_zone_info": false, 00:06:54.840 "zone_management": false, 00:06:54.840 "zone_append": false, 00:06:54.840 "compare": false, 00:06:54.840 "compare_and_write": false, 00:06:54.840 "abort": true, 00:06:54.840 "seek_hole": false, 00:06:54.840 "seek_data": false, 00:06:54.840 "copy": true, 00:06:54.840 "nvme_iov_md": false 00:06:54.840 }, 00:06:54.840 "memory_domains": [ 00:06:54.840 { 00:06:54.840 "dma_device_id": "system", 00:06:54.840 "dma_device_type": 1 00:06:54.840 }, 00:06:54.840 { 00:06:54.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:54.840 "dma_device_type": 2 00:06:54.840 } 00:06:54.840 ], 00:06:54.840 "driver_specific": {} 00:06:54.840 } 00:06:54.840 ] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:54.840 21:39:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:54.840 "name": "Existed_Raid", 00:06:54.840 "uuid": "18173b42-b463-4a6f-aa2e-6a50c873e0ef", 00:06:54.840 "strip_size_kb": 64, 00:06:54.840 "state": "configuring", 00:06:54.840 "raid_level": "concat", 00:06:54.840 "superblock": true, 00:06:54.840 "num_base_bdevs": 2, 00:06:54.840 "num_base_bdevs_discovered": 1, 00:06:54.840 "num_base_bdevs_operational": 2, 00:06:54.840 "base_bdevs_list": [ 00:06:54.840 { 00:06:54.840 "name": "BaseBdev1", 00:06:54.840 "uuid": "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6", 00:06:54.840 "is_configured": true, 00:06:54.840 "data_offset": 2048, 00:06:54.840 "data_size": 63488 00:06:54.840 }, 00:06:54.840 { 00:06:54.840 "name": "BaseBdev2", 00:06:54.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:54.840 
"is_configured": false, 00:06:54.840 "data_offset": 0, 00:06:54.840 "data_size": 0 00:06:54.840 } 00:06:54.840 ] 00:06:54.840 }' 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:54.840 21:39:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 [2024-11-27 21:39:18.343413] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:55.411 [2024-11-27 21:39:18.343535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 [2024-11-27 21:39:18.355425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:55.411 [2024-11-27 21:39:18.357277] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:55.411 [2024-11-27 21:39:18.357366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.411 21:39:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.411 21:39:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.411 "name": "Existed_Raid", 00:06:55.411 "uuid": "907d61c9-d960-4245-b60a-9beeb058ff4b", 00:06:55.411 "strip_size_kb": 64, 00:06:55.411 "state": "configuring", 00:06:55.411 "raid_level": "concat", 00:06:55.411 "superblock": true, 00:06:55.411 "num_base_bdevs": 2, 00:06:55.411 "num_base_bdevs_discovered": 1, 00:06:55.411 "num_base_bdevs_operational": 2, 00:06:55.411 "base_bdevs_list": [ 00:06:55.411 { 00:06:55.411 "name": "BaseBdev1", 00:06:55.411 "uuid": "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6", 00:06:55.411 "is_configured": true, 00:06:55.411 "data_offset": 2048, 00:06:55.411 "data_size": 63488 00:06:55.411 }, 00:06:55.411 { 00:06:55.411 "name": "BaseBdev2", 00:06:55.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.411 "is_configured": false, 00:06:55.411 "data_offset": 0, 00:06:55.411 "data_size": 0 00:06:55.411 } 00:06:55.411 ] 00:06:55.411 }' 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.411 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.671 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:55.671 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.671 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.671 BaseBdev2 00:06:55.671 [2024-11-27 21:39:18.781514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:55.671 [2024-11-27 21:39:18.781730] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:55.671 [2024-11-27 21:39:18.781751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:55.671 [2024-11-27 21:39:18.782014] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:55.671 [2024-11-27 21:39:18.782170] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:55.671 [2024-11-27 21:39:18.782190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:06:55.671 [2024-11-27 21:39:18.782314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.672 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.932 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.933 
21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.933 [ 00:06:55.933 { 00:06:55.933 "name": "BaseBdev2", 00:06:55.933 "aliases": [ 00:06:55.933 "e7e67809-820d-4167-af67-436b4e6b9867" 00:06:55.933 ], 00:06:55.933 "product_name": "Malloc disk", 00:06:55.933 "block_size": 512, 00:06:55.933 "num_blocks": 65536, 00:06:55.933 "uuid": "e7e67809-820d-4167-af67-436b4e6b9867", 00:06:55.933 "assigned_rate_limits": { 00:06:55.933 "rw_ios_per_sec": 0, 00:06:55.933 "rw_mbytes_per_sec": 0, 00:06:55.933 "r_mbytes_per_sec": 0, 00:06:55.933 "w_mbytes_per_sec": 0 00:06:55.933 }, 00:06:55.933 "claimed": true, 00:06:55.933 "claim_type": "exclusive_write", 00:06:55.933 "zoned": false, 00:06:55.933 "supported_io_types": { 00:06:55.933 "read": true, 00:06:55.933 "write": true, 00:06:55.933 "unmap": true, 00:06:55.933 "flush": true, 00:06:55.933 "reset": true, 00:06:55.933 "nvme_admin": false, 00:06:55.933 "nvme_io": false, 00:06:55.933 "nvme_io_md": false, 00:06:55.933 "write_zeroes": true, 00:06:55.933 "zcopy": true, 00:06:55.933 "get_zone_info": false, 00:06:55.933 "zone_management": false, 00:06:55.933 "zone_append": false, 00:06:55.933 "compare": false, 00:06:55.933 "compare_and_write": false, 00:06:55.933 "abort": true, 00:06:55.933 "seek_hole": false, 00:06:55.933 "seek_data": false, 00:06:55.933 "copy": true, 00:06:55.933 "nvme_iov_md": false 00:06:55.933 }, 00:06:55.933 "memory_domains": [ 00:06:55.933 { 00:06:55.933 "dma_device_id": "system", 00:06:55.933 "dma_device_type": 1 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:55.933 "dma_device_type": 2 00:06:55.933 } 00:06:55.933 ], 00:06:55.933 "driver_specific": {} 00:06:55.933 } 00:06:55.933 ] 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:55.933 21:39:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.933 21:39:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:55.933 "name": "Existed_Raid", 00:06:55.933 "uuid": "907d61c9-d960-4245-b60a-9beeb058ff4b", 00:06:55.933 "strip_size_kb": 64, 00:06:55.933 "state": "online", 00:06:55.933 "raid_level": "concat", 00:06:55.933 "superblock": true, 00:06:55.933 "num_base_bdevs": 2, 00:06:55.933 "num_base_bdevs_discovered": 2, 00:06:55.933 "num_base_bdevs_operational": 2, 00:06:55.933 "base_bdevs_list": [ 00:06:55.933 { 00:06:55.933 "name": "BaseBdev1", 00:06:55.933 "uuid": "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6", 00:06:55.933 "is_configured": true, 00:06:55.933 "data_offset": 2048, 00:06:55.933 "data_size": 63488 00:06:55.933 }, 00:06:55.933 { 00:06:55.933 "name": "BaseBdev2", 00:06:55.933 "uuid": "e7e67809-820d-4167-af67-436b4e6b9867", 00:06:55.933 "is_configured": true, 00:06:55.933 "data_offset": 2048, 00:06:55.933 "data_size": 63488 00:06:55.933 } 00:06:55.933 ] 00:06:55.933 }' 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:55.933 21:39:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.193 [2024-11-27 21:39:19.233046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.193 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:56.193 "name": "Existed_Raid", 00:06:56.193 "aliases": [ 00:06:56.193 "907d61c9-d960-4245-b60a-9beeb058ff4b" 00:06:56.193 ], 00:06:56.193 "product_name": "Raid Volume", 00:06:56.193 "block_size": 512, 00:06:56.193 "num_blocks": 126976, 00:06:56.193 "uuid": "907d61c9-d960-4245-b60a-9beeb058ff4b", 00:06:56.193 "assigned_rate_limits": { 00:06:56.193 "rw_ios_per_sec": 0, 00:06:56.193 "rw_mbytes_per_sec": 0, 00:06:56.193 "r_mbytes_per_sec": 0, 00:06:56.193 "w_mbytes_per_sec": 0 00:06:56.193 }, 00:06:56.193 "claimed": false, 00:06:56.193 "zoned": false, 00:06:56.193 "supported_io_types": { 00:06:56.193 "read": true, 00:06:56.193 "write": true, 00:06:56.193 "unmap": true, 00:06:56.193 "flush": true, 00:06:56.193 "reset": true, 00:06:56.193 "nvme_admin": false, 00:06:56.193 "nvme_io": false, 00:06:56.193 "nvme_io_md": false, 00:06:56.194 "write_zeroes": true, 00:06:56.194 "zcopy": false, 00:06:56.194 "get_zone_info": false, 00:06:56.194 "zone_management": false, 00:06:56.194 "zone_append": false, 00:06:56.194 "compare": false, 00:06:56.194 "compare_and_write": false, 00:06:56.194 "abort": false, 00:06:56.194 "seek_hole": false, 00:06:56.194 "seek_data": false, 00:06:56.194 "copy": false, 00:06:56.194 "nvme_iov_md": false 00:06:56.194 }, 00:06:56.194 "memory_domains": [ 00:06:56.194 { 00:06:56.194 
"dma_device_id": "system", 00:06:56.194 "dma_device_type": 1 00:06:56.194 }, 00:06:56.194 { 00:06:56.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.194 "dma_device_type": 2 00:06:56.194 }, 00:06:56.194 { 00:06:56.194 "dma_device_id": "system", 00:06:56.194 "dma_device_type": 1 00:06:56.194 }, 00:06:56.194 { 00:06:56.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.194 "dma_device_type": 2 00:06:56.194 } 00:06:56.194 ], 00:06:56.194 "driver_specific": { 00:06:56.194 "raid": { 00:06:56.194 "uuid": "907d61c9-d960-4245-b60a-9beeb058ff4b", 00:06:56.194 "strip_size_kb": 64, 00:06:56.194 "state": "online", 00:06:56.194 "raid_level": "concat", 00:06:56.194 "superblock": true, 00:06:56.194 "num_base_bdevs": 2, 00:06:56.194 "num_base_bdevs_discovered": 2, 00:06:56.194 "num_base_bdevs_operational": 2, 00:06:56.194 "base_bdevs_list": [ 00:06:56.194 { 00:06:56.194 "name": "BaseBdev1", 00:06:56.194 "uuid": "eeb34f47-fdaa-41c4-80eb-d76707d7d7e6", 00:06:56.194 "is_configured": true, 00:06:56.194 "data_offset": 2048, 00:06:56.194 "data_size": 63488 00:06:56.194 }, 00:06:56.194 { 00:06:56.194 "name": "BaseBdev2", 00:06:56.194 "uuid": "e7e67809-820d-4167-af67-436b4e6b9867", 00:06:56.194 "is_configured": true, 00:06:56.194 "data_offset": 2048, 00:06:56.194 "data_size": 63488 00:06:56.194 } 00:06:56.194 ] 00:06:56.194 } 00:06:56.194 } 00:06:56.194 }' 00:06:56.194 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:56.194 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:56.194 BaseBdev2' 00:06:56.194 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:56.454 21:39:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.454 [2024-11-27 21:39:19.460461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:56.454 [2024-11-27 21:39:19.460491] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:56.454 [2024-11-27 21:39:19.460558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.454 "name": "Existed_Raid", 00:06:56.454 "uuid": "907d61c9-d960-4245-b60a-9beeb058ff4b", 00:06:56.454 "strip_size_kb": 64, 00:06:56.454 "state": "offline", 00:06:56.454 "raid_level": "concat", 00:06:56.454 "superblock": true, 00:06:56.454 "num_base_bdevs": 2, 00:06:56.454 "num_base_bdevs_discovered": 1, 00:06:56.454 "num_base_bdevs_operational": 1, 00:06:56.454 "base_bdevs_list": [ 00:06:56.454 { 00:06:56.454 "name": null, 00:06:56.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.454 "is_configured": false, 00:06:56.454 "data_offset": 0, 00:06:56.454 "data_size": 63488 00:06:56.454 }, 00:06:56.454 { 00:06:56.454 "name": "BaseBdev2", 00:06:56.454 "uuid": "e7e67809-820d-4167-af67-436b4e6b9867", 00:06:56.454 "is_configured": true, 00:06:56.454 "data_offset": 2048, 00:06:56.454 "data_size": 63488 00:06:56.454 } 00:06:56.454 ] 
00:06:56.454 }' 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.454 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 [2024-11-27 21:39:19.987031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:57.023 [2024-11-27 21:39:19.987145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.023 21:39:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:57.023 21:39:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73039 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73039 ']' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73039 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73039 00:06:57.023 killing process with pid 73039 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73039' 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73039 00:06:57.023 [2024-11-27 21:39:20.102208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.023 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73039 00:06:57.023 [2024-11-27 21:39:20.103188] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.284 21:39:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:06:57.284 00:06:57.284 real 0m3.786s 00:06:57.284 user 0m5.995s 00:06:57.284 sys 0m0.732s 00:06:57.284 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.284 21:39:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.284 ************************************ 00:06:57.284 END TEST raid_state_function_test_sb 00:06:57.284 ************************************ 00:06:57.284 21:39:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:06:57.284 21:39:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:57.284 21:39:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.284 21:39:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.284 ************************************ 00:06:57.284 START TEST raid_superblock_test 00:06:57.284 ************************************ 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73269 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73269 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73269 ']' 00:06:57.284 
21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.284 21:39:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.544 [2024-11-27 21:39:20.464252] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:06:57.544 [2024-11-27 21:39:20.464818] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73269 ] 00:06:57.544 [2024-11-27 21:39:20.620659] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.544 [2024-11-27 21:39:20.645144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.804 [2024-11-27 21:39:20.687424] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:57.804 [2024-11-27 21:39:20.687570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 malloc1 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 [2024-11-27 21:39:21.319398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:58.375 [2024-11-27 21:39:21.319458] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.375 [2024-11-27 21:39:21.319489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:06:58.375 [2024-11-27 21:39:21.319504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:06:58.375 [2024-11-27 21:39:21.321610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.375 [2024-11-27 21:39:21.321653] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:58.375 pt1 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 malloc2 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:06:58.375 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.375 [2024-11-27 21:39:21.347807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:58.375 [2024-11-27 21:39:21.347901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:58.376 [2024-11-27 21:39:21.347957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:58.376 [2024-11-27 21:39:21.348004] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:58.376 [2024-11-27 21:39:21.350060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:58.376 [2024-11-27 21:39:21.350131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:06:58.376 pt2 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.376 [2024-11-27 21:39:21.359839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:58.376 [2024-11-27 21:39:21.361687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:58.376 [2024-11-27 21:39:21.361905] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:06:58.376 [2024-11-27 21:39:21.361958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:06:58.376 [2024-11-27 21:39:21.362273] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:06:58.376 [2024-11-27 21:39:21.362458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:06:58.376 [2024-11-27 21:39:21.362503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:06:58.376 [2024-11-27 21:39:21.362704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:58.376 21:39:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.376 "name": "raid_bdev1", 00:06:58.376 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:06:58.376 "strip_size_kb": 64, 00:06:58.376 "state": "online", 00:06:58.376 "raid_level": "concat", 00:06:58.376 "superblock": true, 00:06:58.376 "num_base_bdevs": 2, 00:06:58.376 "num_base_bdevs_discovered": 2, 00:06:58.376 "num_base_bdevs_operational": 2, 00:06:58.376 "base_bdevs_list": [ 00:06:58.376 { 00:06:58.376 "name": "pt1", 00:06:58.376 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.376 "is_configured": true, 00:06:58.376 "data_offset": 2048, 00:06:58.376 "data_size": 63488 00:06:58.376 }, 00:06:58.376 { 00:06:58.376 "name": "pt2", 00:06:58.376 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.376 "is_configured": true, 00:06:58.376 "data_offset": 2048, 00:06:58.376 "data_size": 63488 00:06:58.376 } 00:06:58.376 ] 00:06:58.376 }' 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.376 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:58.662 
21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.662 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 [2024-11-27 21:39:21.767509] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:58.943 "name": "raid_bdev1", 00:06:58.943 "aliases": [ 00:06:58.943 "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32" 00:06:58.943 ], 00:06:58.943 "product_name": "Raid Volume", 00:06:58.943 "block_size": 512, 00:06:58.943 "num_blocks": 126976, 00:06:58.943 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:06:58.943 "assigned_rate_limits": { 00:06:58.943 "rw_ios_per_sec": 0, 00:06:58.943 "rw_mbytes_per_sec": 0, 00:06:58.943 "r_mbytes_per_sec": 0, 00:06:58.943 "w_mbytes_per_sec": 0 00:06:58.943 }, 00:06:58.943 "claimed": false, 00:06:58.943 "zoned": false, 00:06:58.943 "supported_io_types": { 00:06:58.943 "read": true, 00:06:58.943 "write": true, 00:06:58.943 "unmap": true, 00:06:58.943 "flush": true, 00:06:58.943 "reset": true, 00:06:58.943 "nvme_admin": false, 00:06:58.943 "nvme_io": false, 00:06:58.943 "nvme_io_md": false, 00:06:58.943 "write_zeroes": true, 00:06:58.943 "zcopy": false, 00:06:58.943 "get_zone_info": false, 00:06:58.943 "zone_management": false, 00:06:58.943 "zone_append": false, 00:06:58.943 "compare": false, 00:06:58.943 "compare_and_write": false, 00:06:58.943 "abort": false, 00:06:58.943 "seek_hole": false, 00:06:58.943 
"seek_data": false, 00:06:58.943 "copy": false, 00:06:58.943 "nvme_iov_md": false 00:06:58.943 }, 00:06:58.943 "memory_domains": [ 00:06:58.943 { 00:06:58.943 "dma_device_id": "system", 00:06:58.943 "dma_device_type": 1 00:06:58.943 }, 00:06:58.943 { 00:06:58.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.943 "dma_device_type": 2 00:06:58.943 }, 00:06:58.943 { 00:06:58.943 "dma_device_id": "system", 00:06:58.943 "dma_device_type": 1 00:06:58.943 }, 00:06:58.943 { 00:06:58.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:58.943 "dma_device_type": 2 00:06:58.943 } 00:06:58.943 ], 00:06:58.943 "driver_specific": { 00:06:58.943 "raid": { 00:06:58.943 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:06:58.943 "strip_size_kb": 64, 00:06:58.943 "state": "online", 00:06:58.943 "raid_level": "concat", 00:06:58.943 "superblock": true, 00:06:58.943 "num_base_bdevs": 2, 00:06:58.943 "num_base_bdevs_discovered": 2, 00:06:58.943 "num_base_bdevs_operational": 2, 00:06:58.943 "base_bdevs_list": [ 00:06:58.943 { 00:06:58.943 "name": "pt1", 00:06:58.943 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:58.943 "is_configured": true, 00:06:58.943 "data_offset": 2048, 00:06:58.943 "data_size": 63488 00:06:58.943 }, 00:06:58.943 { 00:06:58.943 "name": "pt2", 00:06:58.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:58.943 "is_configured": true, 00:06:58.943 "data_offset": 2048, 00:06:58.943 "data_size": 63488 00:06:58.943 } 00:06:58.943 ] 00:06:58.943 } 00:06:58.943 } 00:06:58.943 }' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:06:58.943 pt2' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.943 21:39:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 21:39:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:06:58.943 [2024-11-27 21:39:21.994926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02c92a1c-b4ea-4547-9f5f-4e8a198f1e32 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 02c92a1c-b4ea-4547-9f5f-4e8a198f1e32 ']' 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.943 [2024-11-27 21:39:22.042602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:58.943 [2024-11-27 21:39:22.042635] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.943 [2024-11-27 21:39:22.042716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.943 [2024-11-27 21:39:22.042782] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:58.943 [2024-11-27 21:39:22.042794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.943 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:06:58.944 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.944 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.944 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] 
| select(.product_name == "passthru")] | any' 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 [2024-11-27 21:39:22.178376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:59.204 [2024-11-27 21:39:22.180257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:59.204 [2024-11-27 21:39:22.180396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:06:59.204 [2024-11-27 21:39:22.180551] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:06:59.204 [2024-11-27 21:39:22.180619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:06:59.204 [2024-11-27 21:39:22.180679] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:06:59.204 request: 00:06:59.204 { 00:06:59.204 "name": "raid_bdev1", 00:06:59.204 "raid_level": "concat", 00:06:59.204 "base_bdevs": [ 00:06:59.204 "malloc1", 00:06:59.204 "malloc2" 00:06:59.204 ], 00:06:59.204 "strip_size_kb": 64, 00:06:59.204 "superblock": false, 00:06:59.204 "method": "bdev_raid_create", 00:06:59.204 "req_id": 1 00:06:59.204 } 00:06:59.204 Got JSON-RPC error response 00:06:59.204 response: 00:06:59.204 { 00:06:59.204 "code": -17, 00:06:59.204 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:06:59.204 } 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:06:59.204 
21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.204 [2024-11-27 21:39:22.242241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:06:59.204 [2024-11-27 21:39:22.242333] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.204 [2024-11-27 21:39:22.242375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:06:59.204 [2024-11-27 21:39:22.242418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.204 [2024-11-27 21:39:22.244503] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.204 [2024-11-27 21:39:22.244574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:06:59.204 [2024-11-27 21:39:22.244701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:06:59.204 [2024-11-27 21:39:22.244792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:06:59.204 pt1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.204 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.205 "name": "raid_bdev1", 00:06:59.205 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:06:59.205 "strip_size_kb": 64, 00:06:59.205 "state": "configuring", 00:06:59.205 "raid_level": "concat", 00:06:59.205 "superblock": true, 00:06:59.205 "num_base_bdevs": 2, 00:06:59.205 "num_base_bdevs_discovered": 1, 00:06:59.205 "num_base_bdevs_operational": 2, 00:06:59.205 "base_bdevs_list": [ 00:06:59.205 { 00:06:59.205 "name": "pt1", 00:06:59.205 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:06:59.205 "is_configured": true, 00:06:59.205 "data_offset": 2048, 00:06:59.205 "data_size": 63488 00:06:59.205 }, 00:06:59.205 { 00:06:59.205 "name": null, 00:06:59.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.205 "is_configured": false, 00:06:59.205 "data_offset": 2048, 00:06:59.205 "data_size": 63488 00:06:59.205 } 00:06:59.205 ] 00:06:59.205 }' 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.205 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.774 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.774 [2024-11-27 21:39:22.681498] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:06:59.774 [2024-11-27 21:39:22.681566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:59.774 [2024-11-27 21:39:22.681589] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:59.774 [2024-11-27 21:39:22.681597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:59.775 [2024-11-27 21:39:22.682013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:59.775 [2024-11-27 21:39:22.682032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:06:59.775 [2024-11-27 21:39:22.682105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:06:59.775 [2024-11-27 21:39:22.682127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:06:59.775 [2024-11-27 21:39:22.682217] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:06:59.775 [2024-11-27 21:39:22.682225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:59.775 [2024-11-27 21:39:22.682464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:06:59.775 [2024-11-27 21:39:22.682567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:06:59.775 [2024-11-27 21:39:22.682581] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:06:59.775 [2024-11-27 21:39:22.682718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.775 pt2 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.775 "name": "raid_bdev1", 00:06:59.775 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:06:59.775 "strip_size_kb": 64, 00:06:59.775 "state": "online", 00:06:59.775 "raid_level": "concat", 00:06:59.775 "superblock": true, 00:06:59.775 "num_base_bdevs": 2, 00:06:59.775 "num_base_bdevs_discovered": 2, 00:06:59.775 "num_base_bdevs_operational": 2, 00:06:59.775 "base_bdevs_list": [ 00:06:59.775 { 00:06:59.775 "name": "pt1", 00:06:59.775 "uuid": "00000000-0000-0000-0000-000000000001", 00:06:59.775 "is_configured": true, 00:06:59.775 "data_offset": 2048, 00:06:59.775 "data_size": 63488 00:06:59.775 }, 00:06:59.775 { 00:06:59.775 "name": "pt2", 00:06:59.775 "uuid": "00000000-0000-0000-0000-000000000002", 00:06:59.775 "is_configured": true, 00:06:59.775 "data_offset": 2048, 00:06:59.775 "data_size": 63488 00:06:59.775 } 00:06:59.775 ] 00:06:59.775 }' 00:06:59.775 21:39:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.775 21:39:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.035 [2024-11-27 21:39:23.109075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.035 "name": "raid_bdev1", 00:07:00.035 "aliases": [ 00:07:00.035 "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32" 00:07:00.035 ], 00:07:00.035 "product_name": "Raid Volume", 00:07:00.035 "block_size": 512, 00:07:00.035 "num_blocks": 126976, 00:07:00.035 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:07:00.035 "assigned_rate_limits": { 00:07:00.035 "rw_ios_per_sec": 0, 00:07:00.035 "rw_mbytes_per_sec": 0, 00:07:00.035 
"r_mbytes_per_sec": 0, 00:07:00.035 "w_mbytes_per_sec": 0 00:07:00.035 }, 00:07:00.035 "claimed": false, 00:07:00.035 "zoned": false, 00:07:00.035 "supported_io_types": { 00:07:00.035 "read": true, 00:07:00.035 "write": true, 00:07:00.035 "unmap": true, 00:07:00.035 "flush": true, 00:07:00.035 "reset": true, 00:07:00.035 "nvme_admin": false, 00:07:00.035 "nvme_io": false, 00:07:00.035 "nvme_io_md": false, 00:07:00.035 "write_zeroes": true, 00:07:00.035 "zcopy": false, 00:07:00.035 "get_zone_info": false, 00:07:00.035 "zone_management": false, 00:07:00.035 "zone_append": false, 00:07:00.035 "compare": false, 00:07:00.035 "compare_and_write": false, 00:07:00.035 "abort": false, 00:07:00.035 "seek_hole": false, 00:07:00.035 "seek_data": false, 00:07:00.035 "copy": false, 00:07:00.035 "nvme_iov_md": false 00:07:00.035 }, 00:07:00.035 "memory_domains": [ 00:07:00.035 { 00:07:00.035 "dma_device_id": "system", 00:07:00.035 "dma_device_type": 1 00:07:00.035 }, 00:07:00.035 { 00:07:00.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.035 "dma_device_type": 2 00:07:00.035 }, 00:07:00.035 { 00:07:00.035 "dma_device_id": "system", 00:07:00.035 "dma_device_type": 1 00:07:00.035 }, 00:07:00.035 { 00:07:00.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.035 "dma_device_type": 2 00:07:00.035 } 00:07:00.035 ], 00:07:00.035 "driver_specific": { 00:07:00.035 "raid": { 00:07:00.035 "uuid": "02c92a1c-b4ea-4547-9f5f-4e8a198f1e32", 00:07:00.035 "strip_size_kb": 64, 00:07:00.035 "state": "online", 00:07:00.035 "raid_level": "concat", 00:07:00.035 "superblock": true, 00:07:00.035 "num_base_bdevs": 2, 00:07:00.035 "num_base_bdevs_discovered": 2, 00:07:00.035 "num_base_bdevs_operational": 2, 00:07:00.035 "base_bdevs_list": [ 00:07:00.035 { 00:07:00.035 "name": "pt1", 00:07:00.035 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:00.035 "is_configured": true, 00:07:00.035 "data_offset": 2048, 00:07:00.035 "data_size": 63488 00:07:00.035 }, 00:07:00.035 { 00:07:00.035 "name": 
"pt2", 00:07:00.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:00.035 "is_configured": true, 00:07:00.035 "data_offset": 2048, 00:07:00.035 "data_size": 63488 00:07:00.035 } 00:07:00.035 ] 00:07:00.035 } 00:07:00.035 } 00:07:00.035 }' 00:07:00.035 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:00.295 pt2' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.295 [2024-11-27 21:39:23.328628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 02c92a1c-b4ea-4547-9f5f-4e8a198f1e32 '!=' 02c92a1c-b4ea-4547-9f5f-4e8a198f1e32 ']'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73269
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73269 ']'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73269
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73269
00:07:00.295 killing process with pid 73269
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:00.295 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73269'
00:07:00.296 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73269
00:07:00.296 [2024-11-27 21:39:23.411257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:00.296 [2024-11-27 21:39:23.411352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:00.296 [2024-11-27 21:39:23.411402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:00.296 [2024-11-27 21:39:23.411411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:07:00.296 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73269
00:07:00.555 [2024-11-27 21:39:23.434253] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:00.556 21:39:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:07:00.556
00:07:00.556 real 0m3.263s
00:07:00.556 user 0m5.098s
00:07:00.556 sys 0m0.656s
************************************
00:07:00.556 END TEST raid_superblock_test
************************************
00:07:00.556 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:00.556 21:39:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.816 21:39:23 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read
00:07:00.816 21:39:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:00.816 21:39:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:00.816 21:39:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:00.816 ************************************
00:07:00.816 START TEST raid_read_error_test
************************************
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UUpdZWhSGU
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73470
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73470
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73470 ']'
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:00.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:00.816 21:39:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:00.816 [2024-11-27 21:39:23.811609] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:07:00.816 [2024-11-27 21:39:23.811856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73470 ]
00:07:01.076 [2024-11-27 21:39:23.965835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.076 [2024-11-27 21:39:23.990762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.076 [2024-11-27 21:39:24.032176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:01.076 [2024-11-27 21:39:24.032230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:01.646 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:01.646 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 BaseBdev1_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 true
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 [2024-11-27 21:39:24.666900] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:01.647 [2024-11-27 21:39:24.666950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:01.647 [2024-11-27 21:39:24.666968] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:07:01.647 [2024-11-27 21:39:24.666976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:01.647 [2024-11-27 21:39:24.669060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:01.647 [2024-11-27 21:39:24.669125] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:01.647 BaseBdev1
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 BaseBdev2_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 true
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 [2024-11-27 21:39:24.707200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:01.647 [2024-11-27 21:39:24.707242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:01.647 [2024-11-27 21:39:24.707275] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:07:01.647 [2024-11-27 21:39:24.707290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:01.647 [2024-11-27 21:39:24.709327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:01.647 [2024-11-27 21:39:24.709364] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:01.647 BaseBdev2
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 [2024-11-27 21:39:24.719214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:01.647 [2024-11-27 21:39:24.721082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:01.647 [2024-11-27 21:39:24.721247] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:07:01.647 [2024-11-27 21:39:24.721259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:01.647 [2024-11-27 21:39:24.721494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870
00:07:01.647 [2024-11-27 21:39:24.721666] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:07:01.647 [2024-11-27 21:39:24.721678] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900
00:07:01.647 [2024-11-27 21:39:24.721794] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:01.647 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:01.906 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:01.906 "name": "raid_bdev1",
00:07:01.906 "uuid": "f1b4b495-4533-45eb-9f2c-8dd07a9885ad",
00:07:01.906 "strip_size_kb": 64,
00:07:01.906 "state": "online",
00:07:01.906 "raid_level": "concat",
00:07:01.906 "superblock": true,
00:07:01.906 "num_base_bdevs": 2,
00:07:01.906 "num_base_bdevs_discovered": 2,
00:07:01.906 "num_base_bdevs_operational": 2,
00:07:01.906 "base_bdevs_list": [
00:07:01.906 {
00:07:01.906 "name": "BaseBdev1",
00:07:01.906 "uuid": "85c08b98-e69b-5c56-8549-78ee4e93da0b",
00:07:01.906 "is_configured": true,
00:07:01.906 "data_offset": 2048,
00:07:01.906 "data_size": 63488
00:07:01.906 },
00:07:01.906 {
00:07:01.906 "name": "BaseBdev2",
00:07:01.906 "uuid": "9191e8a3-e0b4-5d13-a776-2c41c76946bb",
00:07:01.906 "is_configured": true,
00:07:01.906 "data_offset": 2048,
00:07:01.906 "data_size": 63488
00:07:01.906 }
00:07:01.906 ]
00:07:01.906 }'
00:07:01.906 21:39:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:01.906 21:39:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:02.166 21:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:02.166 21:39:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:02.166 [2024-11-27 21:39:25.186871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]]
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.105 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:03.105 "name": "raid_bdev1",
00:07:03.105 "uuid": "f1b4b495-4533-45eb-9f2c-8dd07a9885ad",
00:07:03.105 "strip_size_kb": 64,
00:07:03.105 "state": "online",
00:07:03.105 "raid_level": "concat",
00:07:03.105 "superblock": true,
00:07:03.105 "num_base_bdevs": 2,
00:07:03.105 "num_base_bdevs_discovered": 2,
00:07:03.105 "num_base_bdevs_operational": 2,
00:07:03.105 "base_bdevs_list": [
00:07:03.105 {
00:07:03.105 "name": "BaseBdev1",
00:07:03.105 "uuid": "85c08b98-e69b-5c56-8549-78ee4e93da0b",
00:07:03.105 "is_configured": true,
00:07:03.105 "data_offset": 2048,
00:07:03.105 "data_size": 63488
00:07:03.105 },
00:07:03.105 {
00:07:03.105 "name": "BaseBdev2",
00:07:03.105 "uuid": "9191e8a3-e0b4-5d13-a776-2c41c76946bb",
00:07:03.105 "is_configured": true,
00:07:03.105 "data_offset": 2048,
00:07:03.105 "data_size": 63488
00:07:03.105 }
00:07:03.105 ]
00:07:03.105 }'
00:07:03.106 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:03.106 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.675 [2024-11-27 21:39:26.602697] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:03.675 [2024-11-27 21:39:26.602728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:03.675 [2024-11-27 21:39:26.605345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:03.675 [2024-11-27 21:39:26.605451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:03.675 [2024-11-27 21:39:26.605530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:03.675 [2024-11-27 21:39:26.605604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline
00:07:03.675 {
00:07:03.675 "results": [
00:07:03.675 {
00:07:03.675 "job": "raid_bdev1",
00:07:03.675 "core_mask": "0x1",
00:07:03.675 "workload": "randrw",
00:07:03.675 "percentage": 50,
00:07:03.675 "status": "finished",
00:07:03.675 "queue_depth": 1,
00:07:03.675 "io_size": 131072,
00:07:03.675 "runtime": 1.416691,
00:07:03.675 "iops": 16951.473539395676,
00:07:03.675 "mibps": 2118.9341924244595,
00:07:03.675 "io_failed": 1,
00:07:03.675 "io_timeout": 0,
00:07:03.675 "avg_latency_us": 81.19670088936341,
00:07:03.675 "min_latency_us": 25.152838427947597,
00:07:03.675 "max_latency_us": 1402.2986899563318
00:07:03.675 }
00:07:03.675 ],
00:07:03.675 "core_count": 1
00:07:03.675 }
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73470
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73470 ']'
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73470
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73470
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73470'
00:07:03.675 killing process with pid 73470
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73470
00:07:03.675 [2024-11-27 21:39:26.651956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:03.675 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73470
00:07:03.675 [2024-11-27 21:39:26.667688] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UUpdZWhSGU
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
************************************
00:07:03.935 END TEST raid_read_error_test
************************************
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]]
00:07:03.935
00:07:03.935 real 0m3.166s
00:07:03.935 user 0m4.037s
00:07:03.935 sys 0m0.473s
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.935 21:39:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.935 21:39:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write
00:07:03.935 21:39:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:03.935 21:39:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.935 21:39:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:03.935 ************************************
00:07:03.935 START TEST raid_write_error_test
00:07:03.935 ************************************
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']'
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Im9wWB1D5Y
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73599
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73599
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73599 ']'
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:03.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:03.935 21:39:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:03.936 [2024-11-27 21:39:27.051938] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
00:07:03.936 [2024-11-27 21:39:27.052133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73599 ]
00:07:04.195 [2024-11-27 21:39:27.184666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.195 [2024-11-27 21:39:27.210355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.195 [2024-11-27 21:39:27.252485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:04.195 [2024-11-27 21:39:27.252523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 BaseBdev1_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 true
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 [2024-11-27 21:39:27.919857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:05.135 [2024-11-27 21:39:27.919901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:05.135 [2024-11-27 21:39:27.919926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980
00:07:05.135 [2024-11-27 21:39:27.919943] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:05.135 [2024-11-27 21:39:27.922097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:05.135 [2024-11-27 21:39:27.922130] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:05.135 BaseBdev1
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 BaseBdev2_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 true
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 [2024-11-27 21:39:27.960382] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:05.135 [2024-11-27 21:39:27.960423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:05.135 [2024-11-27 21:39:27.960439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:07:05.135 [2024-11-27 21:39:27.960455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:05.135 [2024-11-27 21:39:27.962495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:05.135 [2024-11-27 21:39:27.962530] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:05.135 BaseBdev2
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:05.135 [2024-11-27 21:39:27.972414] bdev_raid.c:3326:raid_bdev_configure_base_bdev:
*DEBUG*: bdev BaseBdev1 is claimed 00:07:05.135 [2024-11-27 21:39:27.974269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.135 [2024-11-27 21:39:27.974456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:05.135 [2024-11-27 21:39:27.974468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.135 [2024-11-27 21:39:27.974748] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:05.135 [2024-11-27 21:39:27.974942] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:05.135 [2024-11-27 21:39:27.974966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:05.135 [2024-11-27 21:39:27.975115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.135 21:39:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.135 21:39:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.135 21:39:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.135 "name": "raid_bdev1", 00:07:05.135 "uuid": "1ce4f4f9-3b0f-4606-8bfa-86cd87772efe", 00:07:05.135 "strip_size_kb": 64, 00:07:05.135 "state": "online", 00:07:05.135 "raid_level": "concat", 00:07:05.135 "superblock": true, 00:07:05.135 "num_base_bdevs": 2, 00:07:05.135 "num_base_bdevs_discovered": 2, 00:07:05.135 "num_base_bdevs_operational": 2, 00:07:05.135 "base_bdevs_list": [ 00:07:05.135 { 00:07:05.135 "name": "BaseBdev1", 00:07:05.135 "uuid": "5d680766-ca2c-51cd-9bd6-ed1bd6322782", 00:07:05.135 "is_configured": true, 00:07:05.135 "data_offset": 2048, 00:07:05.135 "data_size": 63488 00:07:05.135 }, 00:07:05.135 { 00:07:05.135 "name": "BaseBdev2", 00:07:05.135 "uuid": "8e97530c-ff03-5f47-9f1f-66d0a7a68ccc", 00:07:05.135 "is_configured": true, 00:07:05.135 "data_offset": 2048, 00:07:05.135 "data_size": 63488 00:07:05.135 } 00:07:05.135 ] 00:07:05.135 }' 00:07:05.135 21:39:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.135 21:39:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.394 21:39:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:07:05.394 21:39:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:05.653 [2024-11-27 21:39:28.515920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:06.598 "name": "raid_bdev1", 00:07:06.598 "uuid": "1ce4f4f9-3b0f-4606-8bfa-86cd87772efe", 00:07:06.598 "strip_size_kb": 64, 00:07:06.598 "state": "online", 00:07:06.598 "raid_level": "concat", 00:07:06.598 "superblock": true, 00:07:06.598 "num_base_bdevs": 2, 00:07:06.598 "num_base_bdevs_discovered": 2, 00:07:06.598 "num_base_bdevs_operational": 2, 00:07:06.598 "base_bdevs_list": [ 00:07:06.598 { 00:07:06.598 "name": "BaseBdev1", 00:07:06.598 "uuid": "5d680766-ca2c-51cd-9bd6-ed1bd6322782", 00:07:06.598 "is_configured": true, 00:07:06.598 "data_offset": 2048, 00:07:06.598 "data_size": 63488 00:07:06.598 }, 00:07:06.598 { 00:07:06.598 "name": "BaseBdev2", 00:07:06.598 "uuid": "8e97530c-ff03-5f47-9f1f-66d0a7a68ccc", 00:07:06.598 "is_configured": true, 00:07:06.598 "data_offset": 2048, 00:07:06.598 "data_size": 63488 00:07:06.598 } 00:07:06.598 ] 00:07:06.598 }' 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:06.598 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:06.955 [2024-11-27 21:39:29.887511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:06.955 [2024-11-27 21:39:29.887548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:06.955 [2024-11-27 21:39:29.890193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:06.955 [2024-11-27 21:39:29.890238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:06.955 [2024-11-27 21:39:29.890271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:06.955 [2024-11-27 21:39:29.890280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:06.955 { 00:07:06.955 "results": [ 00:07:06.955 { 00:07:06.955 "job": "raid_bdev1", 00:07:06.955 "core_mask": "0x1", 00:07:06.955 "workload": "randrw", 00:07:06.955 "percentage": 50, 00:07:06.955 "status": "finished", 00:07:06.955 "queue_depth": 1, 00:07:06.955 "io_size": 131072, 00:07:06.955 "runtime": 1.372412, 00:07:06.955 "iops": 17033.514717154907, 00:07:06.955 "mibps": 2129.1893396443634, 00:07:06.955 "io_failed": 1, 00:07:06.955 "io_timeout": 0, 00:07:06.955 "avg_latency_us": 80.85755898596113, 00:07:06.955 "min_latency_us": 25.152838427947597, 00:07:06.955 "max_latency_us": 1409.4532751091704 00:07:06.955 } 00:07:06.955 ], 00:07:06.955 "core_count": 1 00:07:06.955 } 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73599 00:07:06.955 21:39:29 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73599 ']' 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73599 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73599 00:07:06.955 killing process with pid 73599 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73599' 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73599 00:07:06.955 [2024-11-27 21:39:29.939671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.955 21:39:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73599 00:07:06.955 [2024-11-27 21:39:29.955222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Im9wWB1D5Y 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:07.213 21:39:30 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:07.213 00:07:07.213 real 0m3.210s 00:07:07.213 user 0m4.121s 00:07:07.213 sys 0m0.508s 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.213 21:39:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.213 ************************************ 00:07:07.213 END TEST raid_write_error_test 00:07:07.213 ************************************ 00:07:07.213 21:39:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:07.213 21:39:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:07.213 21:39:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:07.213 21:39:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.213 21:39:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.213 ************************************ 00:07:07.213 START TEST raid_state_function_test 00:07:07.213 ************************************ 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73726 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73726' 00:07:07.213 Process raid pid: 73726 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73726 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73726 ']' 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.213 21:39:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.213 [2024-11-27 21:39:30.323639] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:07.213 [2024-11-27 21:39:30.323769] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.472 [2024-11-27 21:39:30.477944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.472 [2024-11-27 21:39:30.502748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.472 [2024-11-27 21:39:30.543983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.472 [2024-11-27 21:39:30.544040] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.039 [2024-11-27 21:39:31.150117] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.039 [2024-11-27 21:39:31.150185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.039 [2024-11-27 21:39:31.150196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.039 [2024-11-27 21:39:31.150205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.039 21:39:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.039 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.303 "name": "Existed_Raid", 00:07:08.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.303 "strip_size_kb": 0, 00:07:08.303 "state": "configuring", 00:07:08.303 
"raid_level": "raid1", 00:07:08.303 "superblock": false, 00:07:08.303 "num_base_bdevs": 2, 00:07:08.303 "num_base_bdevs_discovered": 0, 00:07:08.303 "num_base_bdevs_operational": 2, 00:07:08.303 "base_bdevs_list": [ 00:07:08.303 { 00:07:08.303 "name": "BaseBdev1", 00:07:08.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.303 "is_configured": false, 00:07:08.303 "data_offset": 0, 00:07:08.303 "data_size": 0 00:07:08.303 }, 00:07:08.303 { 00:07:08.303 "name": "BaseBdev2", 00:07:08.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.303 "is_configured": false, 00:07:08.303 "data_offset": 0, 00:07:08.303 "data_size": 0 00:07:08.303 } 00:07:08.303 ] 00:07:08.303 }' 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.303 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.631 [2024-11-27 21:39:31.597319] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:08.631 [2024-11-27 21:39:31.597365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:08.631 [2024-11-27 21:39:31.609284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:08.631 [2024-11-27 21:39:31.609327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:08.631 [2024-11-27 21:39:31.609337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:08.631 [2024-11-27 21:39:31.609355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.631 [2024-11-27 21:39:31.630009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:08.631 BaseBdev1 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.631 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.632 [ 00:07:08.632 { 00:07:08.632 "name": "BaseBdev1", 00:07:08.632 "aliases": [ 00:07:08.632 "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0" 00:07:08.632 ], 00:07:08.632 "product_name": "Malloc disk", 00:07:08.632 "block_size": 512, 00:07:08.632 "num_blocks": 65536, 00:07:08.632 "uuid": "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0", 00:07:08.632 "assigned_rate_limits": { 00:07:08.632 "rw_ios_per_sec": 0, 00:07:08.632 "rw_mbytes_per_sec": 0, 00:07:08.632 "r_mbytes_per_sec": 0, 00:07:08.632 "w_mbytes_per_sec": 0 00:07:08.632 }, 00:07:08.632 "claimed": true, 00:07:08.632 "claim_type": "exclusive_write", 00:07:08.632 "zoned": false, 00:07:08.632 "supported_io_types": { 00:07:08.632 "read": true, 00:07:08.632 "write": true, 00:07:08.632 "unmap": true, 00:07:08.632 "flush": true, 00:07:08.632 "reset": true, 00:07:08.632 "nvme_admin": false, 00:07:08.632 "nvme_io": false, 00:07:08.632 "nvme_io_md": false, 00:07:08.632 "write_zeroes": true, 00:07:08.632 "zcopy": true, 00:07:08.632 "get_zone_info": false, 00:07:08.632 "zone_management": false, 00:07:08.632 "zone_append": false, 00:07:08.632 "compare": false, 00:07:08.632 "compare_and_write": false, 00:07:08.632 "abort": true, 00:07:08.632 "seek_hole": false, 00:07:08.632 "seek_data": false, 00:07:08.632 "copy": true, 00:07:08.632 "nvme_iov_md": 
false 00:07:08.632 }, 00:07:08.632 "memory_domains": [ 00:07:08.632 { 00:07:08.632 "dma_device_id": "system", 00:07:08.632 "dma_device_type": 1 00:07:08.632 }, 00:07:08.632 { 00:07:08.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.632 "dma_device_type": 2 00:07:08.632 } 00:07:08.632 ], 00:07:08.632 "driver_specific": {} 00:07:08.632 } 00:07:08.632 ] 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:08.632 
21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.632 "name": "Existed_Raid", 00:07:08.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.632 "strip_size_kb": 0, 00:07:08.632 "state": "configuring", 00:07:08.632 "raid_level": "raid1", 00:07:08.632 "superblock": false, 00:07:08.632 "num_base_bdevs": 2, 00:07:08.632 "num_base_bdevs_discovered": 1, 00:07:08.632 "num_base_bdevs_operational": 2, 00:07:08.632 "base_bdevs_list": [ 00:07:08.632 { 00:07:08.632 "name": "BaseBdev1", 00:07:08.632 "uuid": "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0", 00:07:08.632 "is_configured": true, 00:07:08.632 "data_offset": 0, 00:07:08.632 "data_size": 65536 00:07:08.632 }, 00:07:08.632 { 00:07:08.632 "name": "BaseBdev2", 00:07:08.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:08.632 "is_configured": false, 00:07:08.632 "data_offset": 0, 00:07:08.632 "data_size": 0 00:07:08.632 } 00:07:08.632 ] 00:07:08.632 }' 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.632 21:39:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.201 [2024-11-27 21:39:32.141216] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:09.201 [2024-11-27 21:39:32.141268] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.201 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.202 [2024-11-27 21:39:32.153208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:09.202 [2024-11-27 21:39:32.155065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:09.202 [2024-11-27 21:39:32.155103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.202 "name": "Existed_Raid", 00:07:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.202 "strip_size_kb": 0, 00:07:09.202 "state": "configuring", 00:07:09.202 "raid_level": "raid1", 00:07:09.202 "superblock": false, 00:07:09.202 "num_base_bdevs": 2, 00:07:09.202 "num_base_bdevs_discovered": 1, 00:07:09.202 "num_base_bdevs_operational": 2, 00:07:09.202 "base_bdevs_list": [ 00:07:09.202 { 00:07:09.202 "name": "BaseBdev1", 00:07:09.202 "uuid": "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0", 00:07:09.202 "is_configured": true, 00:07:09.202 "data_offset": 0, 00:07:09.202 "data_size": 65536 00:07:09.202 }, 00:07:09.202 { 00:07:09.202 "name": "BaseBdev2", 00:07:09.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:09.202 "is_configured": false, 00:07:09.202 "data_offset": 0, 00:07:09.202 "data_size": 0 00:07:09.202 } 00:07:09.202 ] 
00:07:09.202 }' 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.202 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.771 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:09.771 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.772 [2024-11-27 21:39:32.639306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.772 [2024-11-27 21:39:32.639359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:09.772 [2024-11-27 21:39:32.639368] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:09.772 [2024-11-27 21:39:32.639641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:09.772 [2024-11-27 21:39:32.639869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:09.772 [2024-11-27 21:39:32.639895] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:09.772 [2024-11-27 21:39:32.640130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.772 BaseBdev2 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.772 [ 00:07:09.772 { 00:07:09.772 "name": "BaseBdev2", 00:07:09.772 "aliases": [ 00:07:09.772 "baa81b6f-eee7-4c79-8fd6-8569ab523ed2" 00:07:09.772 ], 00:07:09.772 "product_name": "Malloc disk", 00:07:09.772 "block_size": 512, 00:07:09.772 "num_blocks": 65536, 00:07:09.772 "uuid": "baa81b6f-eee7-4c79-8fd6-8569ab523ed2", 00:07:09.772 "assigned_rate_limits": { 00:07:09.772 "rw_ios_per_sec": 0, 00:07:09.772 "rw_mbytes_per_sec": 0, 00:07:09.772 "r_mbytes_per_sec": 0, 00:07:09.772 "w_mbytes_per_sec": 0 00:07:09.772 }, 00:07:09.772 "claimed": true, 00:07:09.772 "claim_type": "exclusive_write", 00:07:09.772 "zoned": false, 00:07:09.772 "supported_io_types": { 00:07:09.772 "read": true, 00:07:09.772 "write": true, 00:07:09.772 "unmap": true, 00:07:09.772 "flush": true, 00:07:09.772 "reset": true, 00:07:09.772 "nvme_admin": false, 00:07:09.772 "nvme_io": false, 00:07:09.772 "nvme_io_md": false, 00:07:09.772 "write_zeroes": 
true, 00:07:09.772 "zcopy": true, 00:07:09.772 "get_zone_info": false, 00:07:09.772 "zone_management": false, 00:07:09.772 "zone_append": false, 00:07:09.772 "compare": false, 00:07:09.772 "compare_and_write": false, 00:07:09.772 "abort": true, 00:07:09.772 "seek_hole": false, 00:07:09.772 "seek_data": false, 00:07:09.772 "copy": true, 00:07:09.772 "nvme_iov_md": false 00:07:09.772 }, 00:07:09.772 "memory_domains": [ 00:07:09.772 { 00:07:09.772 "dma_device_id": "system", 00:07:09.772 "dma_device_type": 1 00:07:09.772 }, 00:07:09.772 { 00:07:09.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.772 "dma_device_type": 2 00:07:09.772 } 00:07:09.772 ], 00:07:09.772 "driver_specific": {} 00:07:09.772 } 00:07:09.772 ] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.772 21:39:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.772 "name": "Existed_Raid", 00:07:09.772 "uuid": "5a1e72ac-effd-48a2-979f-d784fe0c0b4b", 00:07:09.772 "strip_size_kb": 0, 00:07:09.772 "state": "online", 00:07:09.772 "raid_level": "raid1", 00:07:09.772 "superblock": false, 00:07:09.772 "num_base_bdevs": 2, 00:07:09.772 "num_base_bdevs_discovered": 2, 00:07:09.772 "num_base_bdevs_operational": 2, 00:07:09.772 "base_bdevs_list": [ 00:07:09.772 { 00:07:09.772 "name": "BaseBdev1", 00:07:09.772 "uuid": "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0", 00:07:09.772 "is_configured": true, 00:07:09.772 "data_offset": 0, 00:07:09.772 "data_size": 65536 00:07:09.772 }, 00:07:09.772 { 00:07:09.772 "name": "BaseBdev2", 00:07:09.772 "uuid": "baa81b6f-eee7-4c79-8fd6-8569ab523ed2", 00:07:09.772 "is_configured": true, 00:07:09.772 "data_offset": 0, 00:07:09.772 "data_size": 65536 00:07:09.772 } 00:07:09.772 ] 00:07:09.772 }' 00:07:09.772 21:39:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.772 21:39:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:10.032 [2024-11-27 21:39:33.062925] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.032 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:10.032 "name": "Existed_Raid", 00:07:10.032 "aliases": [ 00:07:10.032 "5a1e72ac-effd-48a2-979f-d784fe0c0b4b" 00:07:10.032 ], 00:07:10.032 "product_name": "Raid Volume", 00:07:10.032 "block_size": 512, 00:07:10.032 "num_blocks": 65536, 00:07:10.032 "uuid": "5a1e72ac-effd-48a2-979f-d784fe0c0b4b", 00:07:10.032 "assigned_rate_limits": { 00:07:10.032 "rw_ios_per_sec": 0, 00:07:10.032 "rw_mbytes_per_sec": 0, 00:07:10.032 "r_mbytes_per_sec": 0, 00:07:10.032 
"w_mbytes_per_sec": 0 00:07:10.032 }, 00:07:10.032 "claimed": false, 00:07:10.032 "zoned": false, 00:07:10.032 "supported_io_types": { 00:07:10.032 "read": true, 00:07:10.032 "write": true, 00:07:10.032 "unmap": false, 00:07:10.032 "flush": false, 00:07:10.032 "reset": true, 00:07:10.032 "nvme_admin": false, 00:07:10.032 "nvme_io": false, 00:07:10.032 "nvme_io_md": false, 00:07:10.032 "write_zeroes": true, 00:07:10.032 "zcopy": false, 00:07:10.032 "get_zone_info": false, 00:07:10.032 "zone_management": false, 00:07:10.032 "zone_append": false, 00:07:10.032 "compare": false, 00:07:10.032 "compare_and_write": false, 00:07:10.032 "abort": false, 00:07:10.032 "seek_hole": false, 00:07:10.032 "seek_data": false, 00:07:10.032 "copy": false, 00:07:10.032 "nvme_iov_md": false 00:07:10.032 }, 00:07:10.032 "memory_domains": [ 00:07:10.032 { 00:07:10.032 "dma_device_id": "system", 00:07:10.032 "dma_device_type": 1 00:07:10.032 }, 00:07:10.032 { 00:07:10.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.032 "dma_device_type": 2 00:07:10.032 }, 00:07:10.032 { 00:07:10.032 "dma_device_id": "system", 00:07:10.032 "dma_device_type": 1 00:07:10.032 }, 00:07:10.032 { 00:07:10.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.032 "dma_device_type": 2 00:07:10.032 } 00:07:10.032 ], 00:07:10.032 "driver_specific": { 00:07:10.032 "raid": { 00:07:10.032 "uuid": "5a1e72ac-effd-48a2-979f-d784fe0c0b4b", 00:07:10.033 "strip_size_kb": 0, 00:07:10.033 "state": "online", 00:07:10.033 "raid_level": "raid1", 00:07:10.033 "superblock": false, 00:07:10.033 "num_base_bdevs": 2, 00:07:10.033 "num_base_bdevs_discovered": 2, 00:07:10.033 "num_base_bdevs_operational": 2, 00:07:10.033 "base_bdevs_list": [ 00:07:10.033 { 00:07:10.033 "name": "BaseBdev1", 00:07:10.033 "uuid": "0412fb0a-25a9-4d1d-9ae1-9b27ca9dcee0", 00:07:10.033 "is_configured": true, 00:07:10.033 "data_offset": 0, 00:07:10.033 "data_size": 65536 00:07:10.033 }, 00:07:10.033 { 00:07:10.033 "name": "BaseBdev2", 00:07:10.033 "uuid": 
"baa81b6f-eee7-4c79-8fd6-8569ab523ed2", 00:07:10.033 "is_configured": true, 00:07:10.033 "data_offset": 0, 00:07:10.033 "data_size": 65536 00:07:10.033 } 00:07:10.033 ] 00:07:10.033 } 00:07:10.033 } 00:07:10.033 }' 00:07:10.033 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:10.033 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:10.033 BaseBdev2' 00:07:10.033 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.292 [2024-11-27 21:39:33.286299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:10.292 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:10.293 "name": "Existed_Raid", 00:07:10.293 "uuid": "5a1e72ac-effd-48a2-979f-d784fe0c0b4b", 00:07:10.293 "strip_size_kb": 0, 00:07:10.293 "state": "online", 00:07:10.293 "raid_level": "raid1", 00:07:10.293 "superblock": false, 00:07:10.293 "num_base_bdevs": 2, 00:07:10.293 "num_base_bdevs_discovered": 1, 00:07:10.293 "num_base_bdevs_operational": 1, 00:07:10.293 "base_bdevs_list": [ 00:07:10.293 { 
00:07:10.293 "name": null, 00:07:10.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.293 "is_configured": false, 00:07:10.293 "data_offset": 0, 00:07:10.293 "data_size": 65536 00:07:10.293 }, 00:07:10.293 { 00:07:10.293 "name": "BaseBdev2", 00:07:10.293 "uuid": "baa81b6f-eee7-4c79-8fd6-8569ab523ed2", 00:07:10.293 "is_configured": true, 00:07:10.293 "data_offset": 0, 00:07:10.293 "data_size": 65536 00:07:10.293 } 00:07:10.293 ] 00:07:10.293 }' 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:10.293 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:10.862 [2024-11-27 21:39:33.772606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:10.862 [2024-11-27 21:39:33.772702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.862 [2024-11-27 21:39:33.784277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.862 [2024-11-27 21:39:33.784326] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.862 [2024-11-27 21:39:33.784345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:10.862 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73726 00:07:10.863 21:39:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73726 ']' 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73726 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73726 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.863 killing process with pid 73726 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73726' 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73726 00:07:10.863 [2024-11-27 21:39:33.877872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.863 21:39:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73726 00:07:10.863 [2024-11-27 21:39:33.878921] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:11.123 00:07:11.123 real 0m3.847s 00:07:11.123 user 0m6.120s 00:07:11.123 sys 0m0.751s 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.123 ************************************ 00:07:11.123 END TEST raid_state_function_test 00:07:11.123 ************************************ 00:07:11.123 21:39:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:11.123 21:39:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.123 21:39:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.123 21:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.123 ************************************ 00:07:11.123 START TEST raid_state_function_test_sb 00:07:11.123 ************************************ 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73968 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73968' 00:07:11.123 Process raid pid: 73968 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73968 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73968 ']' 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.123 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.123 21:39:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.384 [2024-11-27 21:39:34.244776] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:11.384 [2024-11-27 21:39:34.244906] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.384 [2024-11-27 21:39:34.395236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.384 [2024-11-27 21:39:34.423086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.384 [2024-11-27 21:39:34.465442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.384 [2024-11-27 21:39:34.465472] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.954 [2024-11-27 21:39:35.064371] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:11.954 [2024-11-27 21:39:35.064431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:11.954 [2024-11-27 21:39:35.064442] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:11.954 [2024-11-27 21:39:35.064452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.954 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.214 "name": "Existed_Raid", 00:07:12.214 "uuid": "dadff693-6cf0-412b-838b-9cdc22d0f1dd", 00:07:12.214 "strip_size_kb": 0, 00:07:12.214 "state": "configuring", 00:07:12.214 "raid_level": "raid1", 00:07:12.214 "superblock": true, 00:07:12.214 "num_base_bdevs": 2, 00:07:12.214 "num_base_bdevs_discovered": 0, 00:07:12.214 "num_base_bdevs_operational": 2, 00:07:12.214 "base_bdevs_list": [ 00:07:12.214 { 00:07:12.214 "name": "BaseBdev1", 00:07:12.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.214 "is_configured": false, 00:07:12.214 "data_offset": 0, 00:07:12.214 "data_size": 0 00:07:12.214 }, 00:07:12.214 { 00:07:12.214 "name": "BaseBdev2", 00:07:12.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.214 "is_configured": false, 00:07:12.214 "data_offset": 0, 00:07:12.214 "data_size": 0 00:07:12.214 } 00:07:12.214 ] 00:07:12.214 }' 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.214 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 [2024-11-27 21:39:35.459681] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:07:12.475 [2024-11-27 21:39:35.459809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 [2024-11-27 21:39:35.467670] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:12.475 [2024-11-27 21:39:35.467762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:12.475 [2024-11-27 21:39:35.467815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:12.475 [2024-11-27 21:39:35.467878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 BaseBdev1 00:07:12.475 [2024-11-27 21:39:35.484469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 [ 00:07:12.475 { 00:07:12.475 "name": "BaseBdev1", 00:07:12.475 "aliases": [ 00:07:12.475 "70061f6e-e75e-4bae-8dde-832f0e4f5ca4" 00:07:12.475 ], 00:07:12.475 "product_name": "Malloc disk", 00:07:12.475 "block_size": 512, 00:07:12.475 "num_blocks": 65536, 00:07:12.475 "uuid": "70061f6e-e75e-4bae-8dde-832f0e4f5ca4", 00:07:12.475 "assigned_rate_limits": { 00:07:12.475 "rw_ios_per_sec": 0, 00:07:12.475 "rw_mbytes_per_sec": 0, 00:07:12.475 "r_mbytes_per_sec": 0, 00:07:12.475 "w_mbytes_per_sec": 0 00:07:12.475 }, 00:07:12.475 "claimed": true, 
00:07:12.475 "claim_type": "exclusive_write", 00:07:12.475 "zoned": false, 00:07:12.475 "supported_io_types": { 00:07:12.475 "read": true, 00:07:12.475 "write": true, 00:07:12.475 "unmap": true, 00:07:12.475 "flush": true, 00:07:12.475 "reset": true, 00:07:12.475 "nvme_admin": false, 00:07:12.475 "nvme_io": false, 00:07:12.475 "nvme_io_md": false, 00:07:12.475 "write_zeroes": true, 00:07:12.475 "zcopy": true, 00:07:12.475 "get_zone_info": false, 00:07:12.475 "zone_management": false, 00:07:12.475 "zone_append": false, 00:07:12.475 "compare": false, 00:07:12.475 "compare_and_write": false, 00:07:12.475 "abort": true, 00:07:12.475 "seek_hole": false, 00:07:12.475 "seek_data": false, 00:07:12.475 "copy": true, 00:07:12.475 "nvme_iov_md": false 00:07:12.475 }, 00:07:12.475 "memory_domains": [ 00:07:12.475 { 00:07:12.475 "dma_device_id": "system", 00:07:12.475 "dma_device_type": 1 00:07:12.475 }, 00:07:12.475 { 00:07:12.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:12.475 "dma_device_type": 2 00:07:12.475 } 00:07:12.475 ], 00:07:12.475 "driver_specific": {} 00:07:12.475 } 00:07:12.475 ] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.475 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.475 "name": "Existed_Raid", 00:07:12.475 "uuid": "ce32b418-8e57-4072-b6c6-bf22bd8056d7", 00:07:12.475 "strip_size_kb": 0, 00:07:12.475 "state": "configuring", 00:07:12.475 "raid_level": "raid1", 00:07:12.475 "superblock": true, 00:07:12.475 "num_base_bdevs": 2, 00:07:12.475 "num_base_bdevs_discovered": 1, 00:07:12.475 "num_base_bdevs_operational": 2, 00:07:12.475 "base_bdevs_list": [ 00:07:12.475 { 00:07:12.475 "name": "BaseBdev1", 00:07:12.475 "uuid": "70061f6e-e75e-4bae-8dde-832f0e4f5ca4", 00:07:12.475 "is_configured": true, 00:07:12.475 "data_offset": 2048, 00:07:12.475 "data_size": 63488 00:07:12.475 }, 00:07:12.475 { 00:07:12.475 "name": "BaseBdev2", 00:07:12.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:12.476 "is_configured": false, 00:07:12.476 
"data_offset": 0, 00:07:12.476 "data_size": 0 00:07:12.476 } 00:07:12.476 ] 00:07:12.476 }' 00:07:12.476 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.476 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.046 [2024-11-27 21:39:35.975768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:13.046 [2024-11-27 21:39:35.975837] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.046 [2024-11-27 21:39:35.983790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:13.046 [2024-11-27 21:39:35.985726] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:13.046 [2024-11-27 21:39:35.985825] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.046 21:39:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.046 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.046 21:39:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:13.046 "name": "Existed_Raid", 00:07:13.046 "uuid": "89539ce0-b8fc-43db-a515-c0019dd7da62", 00:07:13.046 "strip_size_kb": 0, 00:07:13.046 "state": "configuring", 00:07:13.046 "raid_level": "raid1", 00:07:13.046 "superblock": true, 00:07:13.046 "num_base_bdevs": 2, 00:07:13.046 "num_base_bdevs_discovered": 1, 00:07:13.046 "num_base_bdevs_operational": 2, 00:07:13.046 "base_bdevs_list": [ 00:07:13.046 { 00:07:13.046 "name": "BaseBdev1", 00:07:13.046 "uuid": "70061f6e-e75e-4bae-8dde-832f0e4f5ca4", 00:07:13.046 "is_configured": true, 00:07:13.046 "data_offset": 2048, 00:07:13.046 "data_size": 63488 00:07:13.046 }, 00:07:13.046 { 00:07:13.046 "name": "BaseBdev2", 00:07:13.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:13.046 "is_configured": false, 00:07:13.046 "data_offset": 0, 00:07:13.046 "data_size": 0 00:07:13.046 } 00:07:13.046 ] 00:07:13.046 }' 00:07:13.046 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.046 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.616 [2024-11-27 21:39:36.477736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:13.616 [2024-11-27 21:39:36.478078] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:13.616 BaseBdev2 00:07:13.616 [2024-11-27 21:39:36.478141] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:13.616 [2024-11-27 21:39:36.478441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002390 00:07:13.616 [2024-11-27 21:39:36.478598] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:13.616 [2024-11-27 21:39:36.478613] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:13.616 [2024-11-27 21:39:36.478717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.616 [ 00:07:13.616 { 00:07:13.616 "name": "BaseBdev2", 00:07:13.616 "aliases": [ 00:07:13.616 "5c78e913-dd51-4a22-963a-d4d84c2b7262" 00:07:13.616 ], 00:07:13.616 "product_name": "Malloc disk", 00:07:13.616 "block_size": 512, 00:07:13.616 "num_blocks": 65536, 00:07:13.616 "uuid": "5c78e913-dd51-4a22-963a-d4d84c2b7262", 00:07:13.616 "assigned_rate_limits": { 00:07:13.616 "rw_ios_per_sec": 0, 00:07:13.616 "rw_mbytes_per_sec": 0, 00:07:13.616 "r_mbytes_per_sec": 0, 00:07:13.616 "w_mbytes_per_sec": 0 00:07:13.616 }, 00:07:13.616 "claimed": true, 00:07:13.616 "claim_type": "exclusive_write", 00:07:13.616 "zoned": false, 00:07:13.616 "supported_io_types": { 00:07:13.616 "read": true, 00:07:13.616 "write": true, 00:07:13.616 "unmap": true, 00:07:13.616 "flush": true, 00:07:13.616 "reset": true, 00:07:13.616 "nvme_admin": false, 00:07:13.616 "nvme_io": false, 00:07:13.616 "nvme_io_md": false, 00:07:13.616 "write_zeroes": true, 00:07:13.616 "zcopy": true, 00:07:13.616 "get_zone_info": false, 00:07:13.616 "zone_management": false, 00:07:13.616 "zone_append": false, 00:07:13.616 "compare": false, 00:07:13.616 "compare_and_write": false, 00:07:13.616 "abort": true, 00:07:13.616 "seek_hole": false, 00:07:13.616 "seek_data": false, 00:07:13.616 "copy": true, 00:07:13.616 "nvme_iov_md": false 00:07:13.616 }, 00:07:13.616 "memory_domains": [ 00:07:13.616 { 00:07:13.616 "dma_device_id": "system", 00:07:13.616 "dma_device_type": 1 00:07:13.616 }, 00:07:13.616 { 00:07:13.616 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.616 "dma_device_type": 2 00:07:13.616 } 00:07:13.616 ], 00:07:13.616 "driver_specific": {} 00:07:13.616 } 00:07:13.616 ] 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.616 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.617 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:13.617 "name": "Existed_Raid", 00:07:13.617 "uuid": "89539ce0-b8fc-43db-a515-c0019dd7da62", 00:07:13.617 "strip_size_kb": 0, 00:07:13.617 "state": "online", 00:07:13.617 "raid_level": "raid1", 00:07:13.617 "superblock": true, 00:07:13.617 "num_base_bdevs": 2, 00:07:13.617 "num_base_bdevs_discovered": 2, 00:07:13.617 "num_base_bdevs_operational": 2, 00:07:13.617 "base_bdevs_list": [ 00:07:13.617 { 00:07:13.617 "name": "BaseBdev1", 00:07:13.617 "uuid": "70061f6e-e75e-4bae-8dde-832f0e4f5ca4", 00:07:13.617 "is_configured": true, 00:07:13.617 "data_offset": 2048, 00:07:13.617 "data_size": 63488 00:07:13.617 }, 00:07:13.617 { 00:07:13.617 "name": "BaseBdev2", 00:07:13.617 "uuid": "5c78e913-dd51-4a22-963a-d4d84c2b7262", 00:07:13.617 "is_configured": true, 00:07:13.617 "data_offset": 2048, 00:07:13.617 "data_size": 63488 00:07:13.617 } 00:07:13.617 ] 00:07:13.617 }' 00:07:13.617 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:13.617 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:13.877 21:39:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:13.877 [2024-11-27 21:39:36.937307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:13.877 "name": "Existed_Raid", 00:07:13.877 "aliases": [ 00:07:13.877 "89539ce0-b8fc-43db-a515-c0019dd7da62" 00:07:13.877 ], 00:07:13.877 "product_name": "Raid Volume", 00:07:13.877 "block_size": 512, 00:07:13.877 "num_blocks": 63488, 00:07:13.877 "uuid": "89539ce0-b8fc-43db-a515-c0019dd7da62", 00:07:13.877 "assigned_rate_limits": { 00:07:13.877 "rw_ios_per_sec": 0, 00:07:13.877 "rw_mbytes_per_sec": 0, 00:07:13.877 "r_mbytes_per_sec": 0, 00:07:13.877 "w_mbytes_per_sec": 0 00:07:13.877 }, 00:07:13.877 "claimed": false, 00:07:13.877 "zoned": false, 00:07:13.877 "supported_io_types": { 00:07:13.877 "read": true, 00:07:13.877 "write": true, 00:07:13.877 "unmap": false, 00:07:13.877 "flush": false, 00:07:13.877 "reset": true, 00:07:13.877 "nvme_admin": false, 00:07:13.877 "nvme_io": false, 00:07:13.877 "nvme_io_md": false, 00:07:13.877 "write_zeroes": true, 00:07:13.877 "zcopy": false, 00:07:13.877 "get_zone_info": false, 00:07:13.877 "zone_management": false, 00:07:13.877 "zone_append": false, 00:07:13.877 "compare": false, 00:07:13.877 "compare_and_write": false, 00:07:13.877 "abort": false, 00:07:13.877 "seek_hole": false, 00:07:13.877 "seek_data": false, 00:07:13.877 "copy": false, 00:07:13.877 "nvme_iov_md": false 00:07:13.877 }, 00:07:13.877 "memory_domains": [ 00:07:13.877 { 00:07:13.877 "dma_device_id": "system", 00:07:13.877 
"dma_device_type": 1 00:07:13.877 }, 00:07:13.877 { 00:07:13.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.877 "dma_device_type": 2 00:07:13.877 }, 00:07:13.877 { 00:07:13.877 "dma_device_id": "system", 00:07:13.877 "dma_device_type": 1 00:07:13.877 }, 00:07:13.877 { 00:07:13.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:13.877 "dma_device_type": 2 00:07:13.877 } 00:07:13.877 ], 00:07:13.877 "driver_specific": { 00:07:13.877 "raid": { 00:07:13.877 "uuid": "89539ce0-b8fc-43db-a515-c0019dd7da62", 00:07:13.877 "strip_size_kb": 0, 00:07:13.877 "state": "online", 00:07:13.877 "raid_level": "raid1", 00:07:13.877 "superblock": true, 00:07:13.877 "num_base_bdevs": 2, 00:07:13.877 "num_base_bdevs_discovered": 2, 00:07:13.877 "num_base_bdevs_operational": 2, 00:07:13.877 "base_bdevs_list": [ 00:07:13.877 { 00:07:13.877 "name": "BaseBdev1", 00:07:13.877 "uuid": "70061f6e-e75e-4bae-8dde-832f0e4f5ca4", 00:07:13.877 "is_configured": true, 00:07:13.877 "data_offset": 2048, 00:07:13.877 "data_size": 63488 00:07:13.877 }, 00:07:13.877 { 00:07:13.877 "name": "BaseBdev2", 00:07:13.877 "uuid": "5c78e913-dd51-4a22-963a-d4d84c2b7262", 00:07:13.877 "is_configured": true, 00:07:13.877 "data_offset": 2048, 00:07:13.877 "data_size": 63488 00:07:13.877 } 00:07:13.877 ] 00:07:13.877 } 00:07:13.877 } 00:07:13.877 }' 00:07:13.877 21:39:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:14.137 BaseBdev2' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:14.137 21:39:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.137 [2024-11-27 21:39:37.164660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.137 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.137 "name": "Existed_Raid", 00:07:14.137 "uuid": "89539ce0-b8fc-43db-a515-c0019dd7da62", 00:07:14.137 "strip_size_kb": 0, 00:07:14.137 "state": "online", 00:07:14.137 "raid_level": "raid1", 00:07:14.137 "superblock": true, 00:07:14.137 "num_base_bdevs": 2, 00:07:14.137 "num_base_bdevs_discovered": 1, 00:07:14.137 "num_base_bdevs_operational": 1, 00:07:14.137 "base_bdevs_list": [ 00:07:14.137 { 00:07:14.137 "name": null, 00:07:14.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.137 "is_configured": false, 00:07:14.137 "data_offset": 0, 00:07:14.137 "data_size": 63488 00:07:14.137 }, 00:07:14.137 { 00:07:14.137 "name": "BaseBdev2", 00:07:14.137 "uuid": "5c78e913-dd51-4a22-963a-d4d84c2b7262", 00:07:14.137 "is_configured": true, 00:07:14.137 "data_offset": 2048, 00:07:14.138 "data_size": 63488 00:07:14.138 } 00:07:14.138 ] 00:07:14.138 }' 00:07:14.138 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.138 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.708 [2024-11-27 21:39:37.678937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:14.708 [2024-11-27 21:39:37.679111] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.708 [2024-11-27 21:39:37.690619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.708 [2024-11-27 21:39:37.690736] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.708 [2024-11-27 21:39:37.690831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73968 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73968 ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73968 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73968 00:07:14.708 killing process with pid 73968 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73968' 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73968 00:07:14.708 [2024-11-27 21:39:37.760371] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.708 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73968 00:07:14.708 [2024-11-27 21:39:37.761354] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.969 21:39:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:14.969 00:07:14.969 real 0m3.818s 00:07:14.969 user 0m6.060s 00:07:14.969 sys 0m0.753s 00:07:14.969 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.969 21:39:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:14.969 ************************************ 00:07:14.969 END TEST raid_state_function_test_sb 00:07:14.969 ************************************ 00:07:14.969 21:39:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:14.969 21:39:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:14.969 21:39:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.969 21:39:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.969 ************************************ 00:07:14.969 START TEST raid_superblock_test 00:07:14.969 ************************************ 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74209 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74209 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74209 ']' 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.969 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.230 [2024-11-27 21:39:38.128301] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:15.230 [2024-11-27 21:39:38.128514] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74209 ] 00:07:15.230 [2024-11-27 21:39:38.283850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.230 [2024-11-27 21:39:38.308306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.230 [2024-11-27 21:39:38.350678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.230 [2024-11-27 21:39:38.350825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.176 21:39:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.176 malloc1 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.176 [2024-11-27 21:39:38.965908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:16.176 [2024-11-27 21:39:38.966023] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.176 [2024-11-27 21:39:38.966076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:16.176 [2024-11-27 21:39:38.966125] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.176 
[2024-11-27 21:39:38.968200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.176 [2024-11-27 21:39:38.968274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:16.176 pt1 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:16.176 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.177 malloc2 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.177 21:39:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.177 [2024-11-27 21:39:38.994299] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:16.177 [2024-11-27 21:39:38.994399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.177 [2024-11-27 21:39:38.994451] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:16.177 [2024-11-27 21:39:38.994504] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.177 [2024-11-27 21:39:38.996657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.177 [2024-11-27 21:39:38.996735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:16.177 pt2 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:16.177 21:39:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:16.177 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.177 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.177 [2024-11-27 21:39:39.006313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:16.178 [2024-11-27 21:39:39.008263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:16.178 [2024-11-27 21:39:39.008472] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:16.178 [2024-11-27 21:39:39.008525] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:16.178 [2024-11-27 
21:39:39.008880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:07:16.178 [2024-11-27 21:39:39.009104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:16.178 [2024-11-27 21:39:39.009152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:16.178 [2024-11-27 21:39:39.009392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.178 21:39:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.178 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.178 "name": "raid_bdev1", 00:07:16.178 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:16.178 "strip_size_kb": 0, 00:07:16.178 "state": "online", 00:07:16.178 "raid_level": "raid1", 00:07:16.179 "superblock": true, 00:07:16.179 "num_base_bdevs": 2, 00:07:16.179 "num_base_bdevs_discovered": 2, 00:07:16.179 "num_base_bdevs_operational": 2, 00:07:16.179 "base_bdevs_list": [ 00:07:16.179 { 00:07:16.179 "name": "pt1", 00:07:16.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.179 "is_configured": true, 00:07:16.179 "data_offset": 2048, 00:07:16.179 "data_size": 63488 00:07:16.179 }, 00:07:16.179 { 00:07:16.179 "name": "pt2", 00:07:16.179 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.179 "is_configured": true, 00:07:16.179 "data_offset": 2048, 00:07:16.179 "data_size": 63488 00:07:16.179 } 00:07:16.179 ] 00:07:16.179 }' 00:07:16.179 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.179 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:16.449 
21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:16.449 [2024-11-27 21:39:39.425906] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.449 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:16.449 "name": "raid_bdev1", 00:07:16.449 "aliases": [ 00:07:16.449 "24ee04ba-20ce-407c-aba9-cfb68336dcf2" 00:07:16.449 ], 00:07:16.449 "product_name": "Raid Volume", 00:07:16.449 "block_size": 512, 00:07:16.449 "num_blocks": 63488, 00:07:16.449 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:16.449 "assigned_rate_limits": { 00:07:16.449 "rw_ios_per_sec": 0, 00:07:16.449 "rw_mbytes_per_sec": 0, 00:07:16.449 "r_mbytes_per_sec": 0, 00:07:16.449 "w_mbytes_per_sec": 0 00:07:16.449 }, 00:07:16.449 "claimed": false, 00:07:16.449 "zoned": false, 00:07:16.449 "supported_io_types": { 00:07:16.449 "read": true, 00:07:16.449 "write": true, 00:07:16.449 "unmap": false, 00:07:16.449 "flush": false, 00:07:16.449 "reset": true, 00:07:16.449 "nvme_admin": false, 00:07:16.449 "nvme_io": false, 00:07:16.449 "nvme_io_md": false, 00:07:16.449 "write_zeroes": true, 00:07:16.449 "zcopy": false, 00:07:16.449 "get_zone_info": false, 00:07:16.449 "zone_management": false, 00:07:16.449 "zone_append": false, 00:07:16.449 "compare": false, 00:07:16.449 "compare_and_write": false, 00:07:16.449 "abort": false, 00:07:16.449 "seek_hole": false, 
00:07:16.449 "seek_data": false, 00:07:16.449 "copy": false, 00:07:16.449 "nvme_iov_md": false 00:07:16.449 }, 00:07:16.449 "memory_domains": [ 00:07:16.449 { 00:07:16.449 "dma_device_id": "system", 00:07:16.449 "dma_device_type": 1 00:07:16.449 }, 00:07:16.449 { 00:07:16.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.449 "dma_device_type": 2 00:07:16.449 }, 00:07:16.449 { 00:07:16.449 "dma_device_id": "system", 00:07:16.449 "dma_device_type": 1 00:07:16.449 }, 00:07:16.449 { 00:07:16.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.449 "dma_device_type": 2 00:07:16.449 } 00:07:16.449 ], 00:07:16.449 "driver_specific": { 00:07:16.449 "raid": { 00:07:16.449 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:16.449 "strip_size_kb": 0, 00:07:16.449 "state": "online", 00:07:16.449 "raid_level": "raid1", 00:07:16.449 "superblock": true, 00:07:16.449 "num_base_bdevs": 2, 00:07:16.449 "num_base_bdevs_discovered": 2, 00:07:16.449 "num_base_bdevs_operational": 2, 00:07:16.449 "base_bdevs_list": [ 00:07:16.449 { 00:07:16.449 "name": "pt1", 00:07:16.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:16.450 "is_configured": true, 00:07:16.450 "data_offset": 2048, 00:07:16.450 "data_size": 63488 00:07:16.450 }, 00:07:16.450 { 00:07:16.450 "name": "pt2", 00:07:16.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.450 "is_configured": true, 00:07:16.450 "data_offset": 2048, 00:07:16.450 "data_size": 63488 00:07:16.450 } 00:07:16.450 ] 00:07:16.450 } 00:07:16.450 } 00:07:16.450 }' 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:16.450 pt2' 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.450 21:39:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.450 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.753 [2024-11-27 21:39:39.649432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=24ee04ba-20ce-407c-aba9-cfb68336dcf2 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 24ee04ba-20ce-407c-aba9-cfb68336dcf2 ']' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.753 [2024-11-27 21:39:39.693096] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.753 [2024-11-27 21:39:39.693123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.753 [2024-11-27 21:39:39.693210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.753 [2024-11-27 21:39:39.693280] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.753 [2024-11-27 21:39:39.693289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:16.753 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 [2024-11-27 21:39:39.828894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:16.754 [2024-11-27 21:39:39.830919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:16.754 [2024-11-27 21:39:39.831016] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:16.754 [2024-11-27 21:39:39.831072] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:16.754 [2024-11-27 21:39:39.831092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.754 [2024-11-27 21:39:39.831103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:16.754 request: 00:07:16.754 { 00:07:16.754 "name": "raid_bdev1", 00:07:16.754 "raid_level": "raid1", 00:07:16.754 "base_bdevs": [ 00:07:16.754 "malloc1", 00:07:16.754 "malloc2" 00:07:16.754 ], 00:07:16.754 "superblock": false, 00:07:16.754 "method": "bdev_raid_create", 00:07:16.754 "req_id": 1 00:07:16.754 } 00:07:16.754 Got JSON-RPC error response 00:07:16.754 response: 00:07:16.754 { 00:07:16.754 "code": -17, 00:07:16.754 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:16.754 } 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.754 21:39:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.014 [2024-11-27 21:39:39.892750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:17.014 [2024-11-27 21:39:39.892868] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.014 [2024-11-27 21:39:39.892913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:17.014 [2024-11-27 21:39:39.892998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.014 [2024-11-27 21:39:39.895170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.014 [2024-11-27 21:39:39.895240] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:17.014 [2024-11-27 21:39:39.895354] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:17.014 [2024-11-27 21:39:39.895421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:17.014 pt1 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.014 21:39:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.014 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.014 "name": "raid_bdev1", 00:07:17.014 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:17.014 "strip_size_kb": 0, 00:07:17.014 "state": "configuring", 00:07:17.014 "raid_level": "raid1", 00:07:17.015 "superblock": true, 00:07:17.015 "num_base_bdevs": 2, 00:07:17.015 "num_base_bdevs_discovered": 1, 00:07:17.015 "num_base_bdevs_operational": 2, 00:07:17.015 "base_bdevs_list": [ 00:07:17.015 { 00:07:17.015 "name": "pt1", 00:07:17.015 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.015 
"is_configured": true, 00:07:17.015 "data_offset": 2048, 00:07:17.015 "data_size": 63488 00:07:17.015 }, 00:07:17.015 { 00:07:17.015 "name": null, 00:07:17.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.015 "is_configured": false, 00:07:17.015 "data_offset": 2048, 00:07:17.015 "data_size": 63488 00:07:17.015 } 00:07:17.015 ] 00:07:17.015 }' 00:07:17.015 21:39:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.015 21:39:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 [2024-11-27 21:39:40.320035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:17.275 [2024-11-27 21:39:40.320171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:17.275 [2024-11-27 21:39:40.320201] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:17.275 [2024-11-27 21:39:40.320211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:17.275 [2024-11-27 21:39:40.320607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:17.275 [2024-11-27 21:39:40.320624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:17.275 [2024-11-27 21:39:40.320695] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:17.275 [2024-11-27 21:39:40.320716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:17.275 [2024-11-27 21:39:40.320822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:17.275 [2024-11-27 21:39:40.320831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:17.275 [2024-11-27 21:39:40.321137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:17.275 [2024-11-27 21:39:40.321267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:17.275 [2024-11-27 21:39:40.321282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:17.275 [2024-11-27 21:39:40.321437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.275 pt2 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:17.275 
21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.275 "name": "raid_bdev1", 00:07:17.275 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:17.275 "strip_size_kb": 0, 00:07:17.275 "state": "online", 00:07:17.275 "raid_level": "raid1", 00:07:17.275 "superblock": true, 00:07:17.275 "num_base_bdevs": 2, 00:07:17.275 "num_base_bdevs_discovered": 2, 00:07:17.275 "num_base_bdevs_operational": 2, 00:07:17.275 "base_bdevs_list": [ 00:07:17.275 { 00:07:17.275 "name": "pt1", 00:07:17.275 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.275 "is_configured": true, 00:07:17.275 "data_offset": 2048, 00:07:17.275 "data_size": 63488 00:07:17.275 }, 00:07:17.275 { 00:07:17.275 "name": "pt2", 00:07:17.275 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:17.275 "is_configured": true, 00:07:17.275 "data_offset": 2048, 00:07:17.275 "data_size": 63488 00:07:17.275 } 00:07:17.275 ] 00:07:17.275 }' 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:17.275 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:17.846 [2024-11-27 21:39:40.759537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:17.846 "name": "raid_bdev1", 00:07:17.846 "aliases": [ 00:07:17.846 "24ee04ba-20ce-407c-aba9-cfb68336dcf2" 00:07:17.846 ], 00:07:17.846 "product_name": "Raid Volume", 00:07:17.846 "block_size": 512, 00:07:17.846 "num_blocks": 63488, 00:07:17.846 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:17.846 "assigned_rate_limits": { 00:07:17.846 "rw_ios_per_sec": 0, 00:07:17.846 "rw_mbytes_per_sec": 0, 00:07:17.846 "r_mbytes_per_sec": 0, 00:07:17.846 "w_mbytes_per_sec": 0 
00:07:17.846 }, 00:07:17.846 "claimed": false, 00:07:17.846 "zoned": false, 00:07:17.846 "supported_io_types": { 00:07:17.846 "read": true, 00:07:17.846 "write": true, 00:07:17.846 "unmap": false, 00:07:17.846 "flush": false, 00:07:17.846 "reset": true, 00:07:17.846 "nvme_admin": false, 00:07:17.846 "nvme_io": false, 00:07:17.846 "nvme_io_md": false, 00:07:17.846 "write_zeroes": true, 00:07:17.846 "zcopy": false, 00:07:17.846 "get_zone_info": false, 00:07:17.846 "zone_management": false, 00:07:17.846 "zone_append": false, 00:07:17.846 "compare": false, 00:07:17.846 "compare_and_write": false, 00:07:17.846 "abort": false, 00:07:17.846 "seek_hole": false, 00:07:17.846 "seek_data": false, 00:07:17.846 "copy": false, 00:07:17.846 "nvme_iov_md": false 00:07:17.846 }, 00:07:17.846 "memory_domains": [ 00:07:17.846 { 00:07:17.846 "dma_device_id": "system", 00:07:17.846 "dma_device_type": 1 00:07:17.846 }, 00:07:17.846 { 00:07:17.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.846 "dma_device_type": 2 00:07:17.846 }, 00:07:17.846 { 00:07:17.846 "dma_device_id": "system", 00:07:17.846 "dma_device_type": 1 00:07:17.846 }, 00:07:17.846 { 00:07:17.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.846 "dma_device_type": 2 00:07:17.846 } 00:07:17.846 ], 00:07:17.846 "driver_specific": { 00:07:17.846 "raid": { 00:07:17.846 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:17.846 "strip_size_kb": 0, 00:07:17.846 "state": "online", 00:07:17.846 "raid_level": "raid1", 00:07:17.846 "superblock": true, 00:07:17.846 "num_base_bdevs": 2, 00:07:17.846 "num_base_bdevs_discovered": 2, 00:07:17.846 "num_base_bdevs_operational": 2, 00:07:17.846 "base_bdevs_list": [ 00:07:17.846 { 00:07:17.846 "name": "pt1", 00:07:17.846 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:17.846 "is_configured": true, 00:07:17.846 "data_offset": 2048, 00:07:17.846 "data_size": 63488 00:07:17.846 }, 00:07:17.846 { 00:07:17.846 "name": "pt2", 00:07:17.846 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:17.846 "is_configured": true, 00:07:17.846 "data_offset": 2048, 00:07:17.846 "data_size": 63488 00:07:17.846 } 00:07:17.846 ] 00:07:17.846 } 00:07:17.846 } 00:07:17.846 }' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:17.846 pt2' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.846 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:18.107 21:39:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.107 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:18.107 21:39:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.107 [2024-11-27 21:39:41.011114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 24ee04ba-20ce-407c-aba9-cfb68336dcf2 '!=' 24ee04ba-20ce-407c-aba9-cfb68336dcf2 ']' 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.107 [2024-11-27 21:39:41.058838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:18.107 "name": "raid_bdev1", 00:07:18.107 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:18.107 "strip_size_kb": 0, 00:07:18.107 "state": "online", 00:07:18.107 "raid_level": "raid1", 00:07:18.107 "superblock": true, 00:07:18.107 "num_base_bdevs": 2, 00:07:18.107 "num_base_bdevs_discovered": 1, 00:07:18.107 "num_base_bdevs_operational": 1, 00:07:18.107 "base_bdevs_list": [ 00:07:18.107 { 00:07:18.107 "name": null, 00:07:18.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.107 "is_configured": false, 00:07:18.107 "data_offset": 0, 00:07:18.107 "data_size": 63488 00:07:18.107 }, 00:07:18.107 { 00:07:18.107 "name": "pt2", 00:07:18.107 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.107 "is_configured": true, 00:07:18.107 "data_offset": 2048, 00:07:18.107 "data_size": 63488 00:07:18.107 } 00:07:18.107 ] 00:07:18.107 }' 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.107 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.367 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.367 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.367 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.367 [2024-11-27 21:39:41.486037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.367 [2024-11-27 21:39:41.486128] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.367 [2024-11-27 21:39:41.486280] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.367 [2024-11-27 21:39:41.486381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.367 [2024-11-27 21:39:41.486460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:18.627 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.628 [2024-11-27 21:39:41.545926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:18.628 [2024-11-27 21:39:41.545988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.628 [2024-11-27 21:39:41.546007] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:18.628 [2024-11-27 21:39:41.546016] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.628 [2024-11-27 21:39:41.548168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.628 [2024-11-27 21:39:41.548202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:18.628 [2024-11-27 21:39:41.548296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:18.628 [2024-11-27 21:39:41.548328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:18.628 [2024-11-27 21:39:41.548405] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:18.628 [2024-11-27 21:39:41.548423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:18.628 [2024-11-27 21:39:41.548691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:18.628 [2024-11-27 21:39:41.548843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:18.628 [2024-11-27 21:39:41.548856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000001c80 00:07:18.628 [2024-11-27 21:39:41.548988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.628 pt2 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:18.628 "name": "raid_bdev1", 00:07:18.628 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:18.628 "strip_size_kb": 0, 00:07:18.628 "state": "online", 00:07:18.628 "raid_level": "raid1", 00:07:18.628 "superblock": true, 00:07:18.628 "num_base_bdevs": 2, 00:07:18.628 "num_base_bdevs_discovered": 1, 00:07:18.628 "num_base_bdevs_operational": 1, 00:07:18.628 "base_bdevs_list": [ 00:07:18.628 { 00:07:18.628 "name": null, 00:07:18.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.628 "is_configured": false, 00:07:18.628 "data_offset": 2048, 00:07:18.628 "data_size": 63488 00:07:18.628 }, 00:07:18.628 { 00:07:18.628 "name": "pt2", 00:07:18.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.628 "is_configured": true, 00:07:18.628 "data_offset": 2048, 00:07:18.628 "data_size": 63488 00:07:18.628 } 00:07:18.628 ] 00:07:18.628 }' 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.628 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.888 [2024-11-27 21:39:41.969257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.888 [2024-11-27 21:39:41.969328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.888 [2024-11-27 21:39:41.969459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.888 [2024-11-27 21:39:41.969557] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.888 [2024-11-27 21:39:41.969637] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.888 21:39:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.888 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:18.888 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.148 [2024-11-27 21:39:42.017158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:19.148 [2024-11-27 21:39:42.017277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.148 [2024-11-27 21:39:42.017315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:07:19.148 [2024-11-27 21:39:42.017350] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.148 [2024-11-27 21:39:42.019557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.148 [2024-11-27 21:39:42.019625] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:19.148 [2024-11-27 21:39:42.019721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:19.148 [2024-11-27 21:39:42.019784] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:19.148 [2024-11-27 21:39:42.019950] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:19.148 [2024-11-27 21:39:42.020007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:19.148 [2024-11-27 21:39:42.020046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:07:19.148 [2024-11-27 21:39:42.020151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.148 [2024-11-27 21:39:42.020283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:07:19.148 [2024-11-27 21:39:42.020323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:19.148 [2024-11-27 21:39:42.020591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:07:19.148 [2024-11-27 21:39:42.020754] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:07:19.148 [2024-11-27 21:39:42.020792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:07:19.148 [2024-11-27 21:39:42.020969] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.148 pt1 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.148 "name": "raid_bdev1", 00:07:19.148 "uuid": "24ee04ba-20ce-407c-aba9-cfb68336dcf2", 00:07:19.148 "strip_size_kb": 0, 00:07:19.148 "state": "online", 00:07:19.148 "raid_level": "raid1", 00:07:19.148 "superblock": true, 00:07:19.148 "num_base_bdevs": 2, 00:07:19.148 "num_base_bdevs_discovered": 1, 00:07:19.148 "num_base_bdevs_operational": 
1, 00:07:19.148 "base_bdevs_list": [ 00:07:19.148 { 00:07:19.148 "name": null, 00:07:19.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.148 "is_configured": false, 00:07:19.148 "data_offset": 2048, 00:07:19.148 "data_size": 63488 00:07:19.148 }, 00:07:19.148 { 00:07:19.148 "name": "pt2", 00:07:19.148 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.148 "is_configured": true, 00:07:19.148 "data_offset": 2048, 00:07:19.148 "data_size": 63488 00:07:19.148 } 00:07:19.148 ] 00:07:19.148 }' 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.148 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:19.408 [2024-11-27 21:39:42.504571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 24ee04ba-20ce-407c-aba9-cfb68336dcf2 '!=' 24ee04ba-20ce-407c-aba9-cfb68336dcf2 ']' 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74209 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74209 ']' 00:07:19.408 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74209 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74209 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74209' 00:07:19.668 killing process with pid 74209 00:07:19.668 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74209 00:07:19.669 [2024-11-27 21:39:42.570568] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.669 [2024-11-27 21:39:42.570700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.669 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74209 00:07:19.669 [2024-11-27 21:39:42.570784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.669 [2024-11-27 21:39:42.570794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state 
offline 00:07:19.669 [2024-11-27 21:39:42.592972] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:19.929 21:39:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:19.929 00:07:19.929 real 0m4.758s 00:07:19.929 user 0m7.830s 00:07:19.929 sys 0m0.925s 00:07:19.929 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.929 21:39:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.929 ************************************ 00:07:19.929 END TEST raid_superblock_test 00:07:19.929 ************************************ 00:07:19.929 21:39:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:19.929 21:39:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:19.929 21:39:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.929 21:39:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:19.929 ************************************ 00:07:19.929 START TEST raid_read_error_test 00:07:19.929 ************************************ 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IP9Er5MpT1 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74517 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74517 00:07:19.929 
21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74517 ']' 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.929 21:39:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.929 [2024-11-27 21:39:42.974868] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:19.929 [2024-11-27 21:39:42.975059] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74517 ] 00:07:20.188 [2024-11-27 21:39:43.129636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.188 [2024-11-27 21:39:43.154026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.188 [2024-11-27 21:39:43.195281] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.188 [2024-11-27 21:39:43.195318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 BaseBdev1_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 true 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 [2024-11-27 21:39:43.829731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:20.759 [2024-11-27 21:39:43.829784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.759 [2024-11-27 21:39:43.829834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:20.759 [2024-11-27 21:39:43.829843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.759 [2024-11-27 21:39:43.831916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.759 [2024-11-27 21:39:43.831949] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:20.759 BaseBdev1 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 BaseBdev2_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 true 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.759 [2024-11-27 21:39:43.870241] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:20.759 [2024-11-27 21:39:43.870322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.759 [2024-11-27 21:39:43.870374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:20.759 [2024-11-27 21:39:43.870413] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.759 [2024-11-27 21:39:43.872492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.759 [2024-11-27 21:39:43.872560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:20.759 BaseBdev2 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.759 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.020 [2024-11-27 21:39:43.882263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.020 [2024-11-27 21:39:43.884146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:21.020 [2024-11-27 21:39:43.884358] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:21.020 [2024-11-27 21:39:43.884373] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:21.020 [2024-11-27 21:39:43.884630] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:21.020 [2024-11-27 21:39:43.884780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:21.020 [2024-11-27 21:39:43.884792] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:21.020 [2024-11-27 21:39:43.884940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.020 "name": "raid_bdev1", 00:07:21.020 "uuid": "e95e4114-17da-43ad-a242-27d9a6dac214", 00:07:21.020 "strip_size_kb": 0, 00:07:21.020 "state": "online", 00:07:21.020 "raid_level": "raid1", 00:07:21.020 "superblock": true, 00:07:21.020 "num_base_bdevs": 2, 00:07:21.020 
"num_base_bdevs_discovered": 2, 00:07:21.020 "num_base_bdevs_operational": 2, 00:07:21.020 "base_bdevs_list": [ 00:07:21.020 { 00:07:21.020 "name": "BaseBdev1", 00:07:21.020 "uuid": "edae4708-6159-58e7-9818-30708a17fc70", 00:07:21.020 "is_configured": true, 00:07:21.020 "data_offset": 2048, 00:07:21.020 "data_size": 63488 00:07:21.020 }, 00:07:21.020 { 00:07:21.020 "name": "BaseBdev2", 00:07:21.020 "uuid": "182a7fe6-8520-5ebc-ba3c-1c350cf6c81d", 00:07:21.020 "is_configured": true, 00:07:21.020 "data_offset": 2048, 00:07:21.020 "data_size": 63488 00:07:21.020 } 00:07:21.020 ] 00:07:21.020 }' 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.020 21:39:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.280 21:39:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:21.280 21:39:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:21.540 [2024-11-27 21:39:44.409758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:22.482 21:39:45 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.482 "name": "raid_bdev1", 00:07:22.482 "uuid": "e95e4114-17da-43ad-a242-27d9a6dac214", 00:07:22.482 "strip_size_kb": 0, 00:07:22.482 "state": "online", 
00:07:22.482 "raid_level": "raid1", 00:07:22.482 "superblock": true, 00:07:22.482 "num_base_bdevs": 2, 00:07:22.482 "num_base_bdevs_discovered": 2, 00:07:22.482 "num_base_bdevs_operational": 2, 00:07:22.482 "base_bdevs_list": [ 00:07:22.482 { 00:07:22.482 "name": "BaseBdev1", 00:07:22.482 "uuid": "edae4708-6159-58e7-9818-30708a17fc70", 00:07:22.482 "is_configured": true, 00:07:22.482 "data_offset": 2048, 00:07:22.482 "data_size": 63488 00:07:22.482 }, 00:07:22.482 { 00:07:22.482 "name": "BaseBdev2", 00:07:22.482 "uuid": "182a7fe6-8520-5ebc-ba3c-1c350cf6c81d", 00:07:22.482 "is_configured": true, 00:07:22.482 "data_offset": 2048, 00:07:22.482 "data_size": 63488 00:07:22.482 } 00:07:22.482 ] 00:07:22.482 }' 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.482 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.742 [2024-11-27 21:39:45.830027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:22.742 [2024-11-27 21:39:45.830118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:22.742 [2024-11-27 21:39:45.832777] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.742 [2024-11-27 21:39:45.832872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.742 [2024-11-27 21:39:45.833002] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.742 [2024-11-27 21:39:45.833056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name 
raid_bdev1, state offline 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.742 { 00:07:22.742 "results": [ 00:07:22.742 { 00:07:22.742 "job": "raid_bdev1", 00:07:22.742 "core_mask": "0x1", 00:07:22.742 "workload": "randrw", 00:07:22.742 "percentage": 50, 00:07:22.742 "status": "finished", 00:07:22.742 "queue_depth": 1, 00:07:22.742 "io_size": 131072, 00:07:22.742 "runtime": 1.421354, 00:07:22.742 "iops": 19247.843957240773, 00:07:22.742 "mibps": 2405.9804946550967, 00:07:22.742 "io_failed": 0, 00:07:22.742 "io_timeout": 0, 00:07:22.742 "avg_latency_us": 49.330271148424686, 00:07:22.742 "min_latency_us": 22.358078602620086, 00:07:22.742 "max_latency_us": 1423.7624454148472 00:07:22.742 } 00:07:22.742 ], 00:07:22.742 "core_count": 1 00:07:22.742 } 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74517 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74517 ']' 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74517 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.742 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74517 00:07:23.002 killing process with pid 74517 00:07:23.002 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.002 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.002 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74517' 00:07:23.002 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74517 00:07:23.002 [2024-11-27 
21:39:45.879332] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:23.002 21:39:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74517 00:07:23.002 [2024-11-27 21:39:45.894720] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IP9Er5MpT1 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:23.002 00:07:23.002 real 0m3.230s 00:07:23.002 user 0m4.143s 00:07:23.002 sys 0m0.487s 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.002 ************************************ 00:07:23.002 END TEST raid_read_error_test 00:07:23.002 ************************************ 00:07:23.002 21:39:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.261 21:39:46 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:23.261 21:39:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.261 21:39:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.261 21:39:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.261 ************************************ 00:07:23.261 START TEST 
raid_write_error_test 00:07:23.261 ************************************ 00:07:23.261 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:23.261 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:23.261 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:23.262 21:39:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hEJq4LyS2r 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74657 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74657 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74657 ']' 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.262 21:39:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.262 [2024-11-27 21:39:46.278930] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
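The xtrace above steps through the `base_bdevs` setup loop in `bdev_raid.sh` one increment at a time (`(( i = 1 ))`, `(( i <= num_base_bdevs ))`, `echo BaseBdevN`). As a standalone sketch, with variable names taken directly from the trace, the loop reduces to:

```shell
# Rebuild the base bdev name list for a 2-device array, mirroring the
# (( i = 1 )) / (( i <= num_base_bdevs )) / echo BaseBdevN trace above.
num_base_bdevs=2
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done
echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```

The trace's command-substitution style (`$(for ... echo ...)`) and this plain loop produce the same list; the array is then spliced into the `bdev_raid_create -b` argument seen later in the log.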
00:07:23.262 [2024-11-27 21:39:46.279130] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74657 ] 00:07:23.521 [2024-11-27 21:39:46.435265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.522 [2024-11-27 21:39:46.459277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.522 [2024-11-27 21:39:46.501115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.522 [2024-11-27 21:39:46.501146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 BaseBdev1_malloc 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 true 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 [2024-11-27 21:39:47.123960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:24.091 [2024-11-27 21:39:47.124033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.091 [2024-11-27 21:39:47.124060] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:24.091 [2024-11-27 21:39:47.124070] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.091 [2024-11-27 21:39:47.126209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.091 [2024-11-27 21:39:47.126306] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:24.091 BaseBdev1 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 BaseBdev2_malloc 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:24.091 21:39:47 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 true 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.091 [2024-11-27 21:39:47.164332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.091 [2024-11-27 21:39:47.164428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.091 [2024-11-27 21:39:47.164450] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:24.091 [2024-11-27 21:39:47.164467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.091 [2024-11-27 21:39:47.166500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.091 [2024-11-27 21:39:47.166535] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.091 BaseBdev2 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:24.091 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.092 [2024-11-27 21:39:47.176360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:24.092 [2024-11-27 21:39:47.178219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.092 [2024-11-27 21:39:47.178388] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:24.092 [2024-11-27 21:39:47.178401] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:24.092 [2024-11-27 21:39:47.178652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:24.092 [2024-11-27 21:39:47.178797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:24.092 [2024-11-27 21:39:47.178821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:24.092 [2024-11-27 21:39:47.178948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.092 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.380 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.380 "name": "raid_bdev1", 00:07:24.380 "uuid": "aba9c982-86bb-47bf-b9c9-f0ae4250bb8c", 00:07:24.380 "strip_size_kb": 0, 00:07:24.380 "state": "online", 00:07:24.380 "raid_level": "raid1", 00:07:24.380 "superblock": true, 00:07:24.380 "num_base_bdevs": 2, 00:07:24.380 "num_base_bdevs_discovered": 2, 00:07:24.380 "num_base_bdevs_operational": 2, 00:07:24.380 "base_bdevs_list": [ 00:07:24.380 { 00:07:24.380 "name": "BaseBdev1", 00:07:24.380 "uuid": "549d894f-5e41-5142-9f67-c004dcc31377", 00:07:24.380 "is_configured": true, 00:07:24.380 "data_offset": 2048, 00:07:24.380 "data_size": 63488 00:07:24.380 }, 00:07:24.380 { 00:07:24.380 "name": "BaseBdev2", 00:07:24.380 "uuid": "397c99ee-20a8-55da-b0e1-e1e735175738", 00:07:24.380 "is_configured": true, 00:07:24.380 "data_offset": 2048, 00:07:24.380 "data_size": 63488 00:07:24.381 } 00:07:24.381 ] 00:07:24.381 }' 00:07:24.381 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.381 21:39:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.646 21:39:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:24.646 21:39:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:24.646 [2024-11-27 21:39:47.691977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.583 [2024-11-27 21:39:48.608613] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:25.583 [2024-11-27 21:39:48.608745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:25.583 [2024-11-27 21:39:48.608985] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002a10 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.583 "name": "raid_bdev1", 00:07:25.583 "uuid": "aba9c982-86bb-47bf-b9c9-f0ae4250bb8c", 00:07:25.583 "strip_size_kb": 0, 00:07:25.583 "state": "online", 00:07:25.583 "raid_level": "raid1", 00:07:25.583 "superblock": true, 00:07:25.583 "num_base_bdevs": 2, 00:07:25.583 "num_base_bdevs_discovered": 1, 00:07:25.583 "num_base_bdevs_operational": 1, 00:07:25.583 "base_bdevs_list": [ 00:07:25.583 { 00:07:25.583 "name": null, 00:07:25.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.583 "is_configured": false, 00:07:25.583 "data_offset": 0, 00:07:25.583 "data_size": 63488 00:07:25.583 }, 00:07:25.583 { 00:07:25.583 "name": 
"BaseBdev2", 00:07:25.583 "uuid": "397c99ee-20a8-55da-b0e1-e1e735175738", 00:07:25.583 "is_configured": true, 00:07:25.583 "data_offset": 2048, 00:07:25.583 "data_size": 63488 00:07:25.583 } 00:07:25.583 ] 00:07:25.583 }' 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.583 21:39:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.150 [2024-11-27 21:39:49.054685] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:26.150 [2024-11-27 21:39:49.054723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.150 [2024-11-27 21:39:49.057294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.150 [2024-11-27 21:39:49.057349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.150 [2024-11-27 21:39:49.057413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:26.150 [2024-11-27 21:39:49.057424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:26.150 { 00:07:26.150 "results": [ 00:07:26.150 { 00:07:26.150 "job": "raid_bdev1", 00:07:26.150 "core_mask": "0x1", 00:07:26.150 "workload": "randrw", 00:07:26.150 "percentage": 50, 00:07:26.150 "status": "finished", 00:07:26.150 "queue_depth": 1, 00:07:26.150 "io_size": 131072, 00:07:26.150 "runtime": 1.363377, 00:07:26.150 "iops": 22709.786067976795, 00:07:26.150 "mibps": 2838.7232584970993, 00:07:26.150 "io_failed": 0, 00:07:26.150 "io_timeout": 0, 
00:07:26.150 "avg_latency_us": 41.4235578814882, 00:07:26.150 "min_latency_us": 22.022707423580787, 00:07:26.150 "max_latency_us": 1345.0620087336245 00:07:26.150 } 00:07:26.150 ], 00:07:26.150 "core_count": 1 00:07:26.150 } 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74657 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74657 ']' 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74657 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74657 00:07:26.150 killing process with pid 74657 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74657' 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74657 00:07:26.150 [2024-11-27 21:39:49.105354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:26.150 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74657 00:07:26.150 [2024-11-27 21:39:49.120102] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.hEJq4LyS2r 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:26.409 ************************************ 00:07:26.409 END TEST raid_write_error_test 00:07:26.409 ************************************ 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:26.409 00:07:26.409 real 0m3.155s 00:07:26.409 user 0m4.016s 00:07:26.409 sys 0m0.487s 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.409 21:39:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.410 21:39:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:26.410 21:39:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:26.410 21:39:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:26.410 21:39:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:26.410 21:39:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.410 21:39:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.410 ************************************ 00:07:26.410 START TEST raid_state_function_test 00:07:26.410 ************************************ 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:26.410 21:39:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74784 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74784' 00:07:26.410 Process raid pid: 74784 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74784 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74784 ']' 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
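The trace above (bdev_raid.sh@215 through @225) selects the create-time arguments from the test parameters: a non-raid1 level gets a strip size (`-z 64`), and `superblock=false` leaves the superblock flag empty. A minimal sketch of that branch logic, using the same variable names as the trace:

```shell
# Mirror of the argument selection traced above: raid0 (unlike raid1)
# takes a strip size, and superblock=false drops the -s flag.
raid_level=raid0
superblock=false

strip_size_create_arg=
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"
fi

if [ "$superblock" = true ]; then
    superblock_create_arg=-s
else
    superblock_create_arg=
fi

echo "$strip_size_create_arg"   # -z 64
```

Both variables are later expanded into the `rpc_cmd bdev_raid_create` invocation, which is why the raid0 create call in this log carries `-z 64` and no `-s`, while the raid1 create earlier carried `-s` and no strip size.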
00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.410 21:39:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.410 [2024-11-27 21:39:49.496439] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:26.410 [2024-11-27 21:39:49.496642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:26.669 [2024-11-27 21:39:49.650787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.669 [2024-11-27 21:39:49.676062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.669 [2024-11-27 21:39:49.719417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.669 [2024-11-27 21:39:49.719525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.238 [2024-11-27 21:39:50.330540] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.238 [2024-11-27 21:39:50.330638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.238 [2024-11-27 21:39:50.330653] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.238 [2024-11-27 21:39:50.330663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.238 [2024-11-27 21:39:50.330669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.238 [2024-11-27 21:39:50.330682] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.238 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.498 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.498 "name": "Existed_Raid", 00:07:27.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.498 "strip_size_kb": 64, 00:07:27.498 "state": "configuring", 00:07:27.498 "raid_level": "raid0", 00:07:27.498 "superblock": false, 00:07:27.498 "num_base_bdevs": 3, 00:07:27.498 "num_base_bdevs_discovered": 0, 00:07:27.498 "num_base_bdevs_operational": 3, 00:07:27.498 "base_bdevs_list": [ 00:07:27.498 { 00:07:27.498 "name": "BaseBdev1", 00:07:27.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.498 "is_configured": false, 00:07:27.498 "data_offset": 0, 00:07:27.498 "data_size": 0 00:07:27.498 }, 00:07:27.498 { 00:07:27.498 "name": "BaseBdev2", 00:07:27.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.498 "is_configured": false, 00:07:27.498 "data_offset": 0, 00:07:27.498 "data_size": 0 00:07:27.498 }, 00:07:27.498 { 00:07:27.498 "name": "BaseBdev3", 00:07:27.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.498 "is_configured": false, 00:07:27.498 "data_offset": 0, 00:07:27.498 "data_size": 0 00:07:27.498 } 00:07:27.498 ] 00:07:27.498 }' 00:07:27.498 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.498 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.758 21:39:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 [2024-11-27 21:39:50.753728] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:27.758 [2024-11-27 21:39:50.753827] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 [2024-11-27 21:39:50.761732] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:27.758 [2024-11-27 21:39:50.761816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:27.758 [2024-11-27 21:39:50.761864] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:27.758 [2024-11-27 21:39:50.761897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:27.758 [2024-11-27 21:39:50.761935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:27.758 [2024-11-27 21:39:50.761959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 [2024-11-27 21:39:50.778660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:27.758 BaseBdev1 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 [ 00:07:27.758 { 00:07:27.758 "name": "BaseBdev1", 00:07:27.758 "aliases": [ 00:07:27.758 "03a98140-30f2-461d-a76d-32529b798f48" 00:07:27.758 ], 00:07:27.758 
"product_name": "Malloc disk", 00:07:27.758 "block_size": 512, 00:07:27.758 "num_blocks": 65536, 00:07:27.758 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:27.758 "assigned_rate_limits": { 00:07:27.758 "rw_ios_per_sec": 0, 00:07:27.758 "rw_mbytes_per_sec": 0, 00:07:27.758 "r_mbytes_per_sec": 0, 00:07:27.758 "w_mbytes_per_sec": 0 00:07:27.758 }, 00:07:27.758 "claimed": true, 00:07:27.758 "claim_type": "exclusive_write", 00:07:27.758 "zoned": false, 00:07:27.758 "supported_io_types": { 00:07:27.758 "read": true, 00:07:27.758 "write": true, 00:07:27.758 "unmap": true, 00:07:27.758 "flush": true, 00:07:27.758 "reset": true, 00:07:27.758 "nvme_admin": false, 00:07:27.758 "nvme_io": false, 00:07:27.758 "nvme_io_md": false, 00:07:27.758 "write_zeroes": true, 00:07:27.758 "zcopy": true, 00:07:27.758 "get_zone_info": false, 00:07:27.758 "zone_management": false, 00:07:27.758 "zone_append": false, 00:07:27.758 "compare": false, 00:07:27.758 "compare_and_write": false, 00:07:27.758 "abort": true, 00:07:27.758 "seek_hole": false, 00:07:27.758 "seek_data": false, 00:07:27.758 "copy": true, 00:07:27.758 "nvme_iov_md": false 00:07:27.758 }, 00:07:27.758 "memory_domains": [ 00:07:27.758 { 00:07:27.758 "dma_device_id": "system", 00:07:27.758 "dma_device_type": 1 00:07:27.758 }, 00:07:27.758 { 00:07:27.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.758 "dma_device_type": 2 00:07:27.758 } 00:07:27.758 ], 00:07:27.758 "driver_specific": {} 00:07:27.758 } 00:07:27.758 ] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.758 21:39:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.758 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.758 "name": "Existed_Raid", 00:07:27.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.758 "strip_size_kb": 64, 00:07:27.758 "state": "configuring", 00:07:27.758 "raid_level": "raid0", 00:07:27.758 "superblock": false, 00:07:27.758 "num_base_bdevs": 3, 00:07:27.758 "num_base_bdevs_discovered": 1, 00:07:27.758 "num_base_bdevs_operational": 3, 00:07:27.758 "base_bdevs_list": [ 00:07:27.758 { 00:07:27.758 "name": "BaseBdev1", 
00:07:27.758 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:27.759 "is_configured": true, 00:07:27.759 "data_offset": 0, 00:07:27.759 "data_size": 65536 00:07:27.759 }, 00:07:27.759 { 00:07:27.759 "name": "BaseBdev2", 00:07:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.759 "is_configured": false, 00:07:27.759 "data_offset": 0, 00:07:27.759 "data_size": 0 00:07:27.759 }, 00:07:27.759 { 00:07:27.759 "name": "BaseBdev3", 00:07:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.759 "is_configured": false, 00:07:27.759 "data_offset": 0, 00:07:27.759 "data_size": 0 00:07:27.759 } 00:07:27.759 ] 00:07:27.759 }' 00:07:27.759 21:39:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.759 21:39:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.329 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:28.329 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.329 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.329 [2024-11-27 21:39:51.190011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:28.329 [2024-11-27 21:39:51.190063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:28.329 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.330 [2024-11-27 
21:39:51.202003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.330 [2024-11-27 21:39:51.203921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:28.330 [2024-11-27 21:39:51.204006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:28.330 [2024-11-27 21:39:51.204060] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:28.330 [2024-11-27 21:39:51.204083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.330 "name": "Existed_Raid", 00:07:28.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.330 "strip_size_kb": 64, 00:07:28.330 "state": "configuring", 00:07:28.330 "raid_level": "raid0", 00:07:28.330 "superblock": false, 00:07:28.330 "num_base_bdevs": 3, 00:07:28.330 "num_base_bdevs_discovered": 1, 00:07:28.330 "num_base_bdevs_operational": 3, 00:07:28.330 "base_bdevs_list": [ 00:07:28.330 { 00:07:28.330 "name": "BaseBdev1", 00:07:28.330 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:28.330 "is_configured": true, 00:07:28.330 "data_offset": 0, 00:07:28.330 "data_size": 65536 00:07:28.330 }, 00:07:28.330 { 00:07:28.330 "name": "BaseBdev2", 00:07:28.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.330 "is_configured": false, 00:07:28.330 "data_offset": 0, 00:07:28.330 "data_size": 0 00:07:28.330 }, 00:07:28.330 { 00:07:28.330 "name": "BaseBdev3", 00:07:28.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.330 "is_configured": false, 00:07:28.330 "data_offset": 0, 00:07:28.330 "data_size": 0 00:07:28.330 } 00:07:28.330 ] 00:07:28.330 }' 00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:28.330 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.590 [2024-11-27 21:39:51.644196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.590 BaseBdev2 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:28.590 21:39:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.590 [ 00:07:28.590 { 00:07:28.590 "name": "BaseBdev2", 00:07:28.590 "aliases": [ 00:07:28.590 "e120693c-f1c3-4d38-bfb3-c3b5b17defa7" 00:07:28.590 ], 00:07:28.590 "product_name": "Malloc disk", 00:07:28.590 "block_size": 512, 00:07:28.590 "num_blocks": 65536, 00:07:28.590 "uuid": "e120693c-f1c3-4d38-bfb3-c3b5b17defa7", 00:07:28.590 "assigned_rate_limits": { 00:07:28.590 "rw_ios_per_sec": 0, 00:07:28.590 "rw_mbytes_per_sec": 0, 00:07:28.590 "r_mbytes_per_sec": 0, 00:07:28.590 "w_mbytes_per_sec": 0 00:07:28.590 }, 00:07:28.590 "claimed": true, 00:07:28.590 "claim_type": "exclusive_write", 00:07:28.590 "zoned": false, 00:07:28.590 "supported_io_types": { 00:07:28.590 "read": true, 00:07:28.590 "write": true, 00:07:28.590 "unmap": true, 00:07:28.590 "flush": true, 00:07:28.590 "reset": true, 00:07:28.590 "nvme_admin": false, 00:07:28.590 "nvme_io": false, 00:07:28.590 "nvme_io_md": false, 00:07:28.590 "write_zeroes": true, 00:07:28.590 "zcopy": true, 00:07:28.590 "get_zone_info": false, 00:07:28.590 "zone_management": false, 00:07:28.590 "zone_append": false, 00:07:28.590 "compare": false, 00:07:28.590 "compare_and_write": false, 00:07:28.590 "abort": true, 00:07:28.590 "seek_hole": false, 00:07:28.590 "seek_data": false, 00:07:28.590 "copy": true, 00:07:28.590 "nvme_iov_md": false 00:07:28.590 }, 00:07:28.590 "memory_domains": [ 00:07:28.590 { 00:07:28.590 "dma_device_id": "system", 00:07:28.590 "dma_device_type": 1 00:07:28.590 }, 00:07:28.590 { 00:07:28.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.590 "dma_device_type": 2 00:07:28.590 } 00:07:28.590 ], 00:07:28.590 "driver_specific": {} 00:07:28.590 } 00:07:28.590 ] 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.590 21:39:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:28.590 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.591 21:39:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.851 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.851 "name": "Existed_Raid", 00:07:28.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.851 "strip_size_kb": 64, 00:07:28.851 "state": "configuring", 00:07:28.851 "raid_level": "raid0", 00:07:28.851 "superblock": false, 00:07:28.851 "num_base_bdevs": 3, 00:07:28.851 "num_base_bdevs_discovered": 2, 00:07:28.851 "num_base_bdevs_operational": 3, 00:07:28.851 "base_bdevs_list": [ 00:07:28.851 { 00:07:28.851 "name": "BaseBdev1", 00:07:28.851 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:28.851 "is_configured": true, 00:07:28.851 "data_offset": 0, 00:07:28.851 "data_size": 65536 00:07:28.851 }, 00:07:28.851 { 00:07:28.851 "name": "BaseBdev2", 00:07:28.851 "uuid": "e120693c-f1c3-4d38-bfb3-c3b5b17defa7", 00:07:28.851 "is_configured": true, 00:07:28.851 "data_offset": 0, 00:07:28.851 "data_size": 65536 00:07:28.851 }, 00:07:28.851 { 00:07:28.851 "name": "BaseBdev3", 00:07:28.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.851 "is_configured": false, 00:07:28.851 "data_offset": 0, 00:07:28.851 "data_size": 0 00:07:28.851 } 00:07:28.851 ] 00:07:28.851 }' 00:07:28.851 21:39:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.851 21:39:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.112 [2024-11-27 21:39:52.155761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:29.112 [2024-11-27 21:39:52.155899] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:29.112 [2024-11-27 21:39:52.155939] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:29.112 [2024-11-27 21:39:52.157038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:29.112 [2024-11-27 21:39:52.157613] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:29.112 [2024-11-27 21:39:52.157673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:29.112 [2024-11-27 21:39:52.158364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.112 BaseBdev3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.112 
21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.112 [ 00:07:29.112 { 00:07:29.112 "name": "BaseBdev3", 00:07:29.112 "aliases": [ 00:07:29.112 "4a31c143-8694-420c-a753-267b7fa169c3" 00:07:29.112 ], 00:07:29.112 "product_name": "Malloc disk", 00:07:29.112 "block_size": 512, 00:07:29.112 "num_blocks": 65536, 00:07:29.112 "uuid": "4a31c143-8694-420c-a753-267b7fa169c3", 00:07:29.112 "assigned_rate_limits": { 00:07:29.112 "rw_ios_per_sec": 0, 00:07:29.112 "rw_mbytes_per_sec": 0, 00:07:29.112 "r_mbytes_per_sec": 0, 00:07:29.112 "w_mbytes_per_sec": 0 00:07:29.112 }, 00:07:29.112 "claimed": true, 00:07:29.112 "claim_type": "exclusive_write", 00:07:29.112 "zoned": false, 00:07:29.112 "supported_io_types": { 00:07:29.112 "read": true, 00:07:29.112 "write": true, 00:07:29.112 "unmap": true, 00:07:29.112 "flush": true, 00:07:29.112 "reset": true, 00:07:29.112 "nvme_admin": false, 00:07:29.112 "nvme_io": false, 00:07:29.112 "nvme_io_md": false, 00:07:29.112 "write_zeroes": true, 00:07:29.112 "zcopy": true, 00:07:29.112 "get_zone_info": false, 00:07:29.112 "zone_management": false, 00:07:29.112 "zone_append": false, 00:07:29.112 "compare": false, 00:07:29.112 "compare_and_write": false, 00:07:29.112 "abort": true, 00:07:29.112 "seek_hole": false, 00:07:29.112 "seek_data": false, 00:07:29.112 "copy": true, 00:07:29.112 "nvme_iov_md": false 00:07:29.112 }, 00:07:29.112 "memory_domains": [ 00:07:29.112 { 00:07:29.112 "dma_device_id": "system", 00:07:29.112 "dma_device_type": 1 00:07:29.112 }, 00:07:29.112 { 00:07:29.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.112 "dma_device_type": 2 00:07:29.112 } 00:07:29.112 ], 00:07:29.112 "driver_specific": {} 00:07:29.112 } 00:07:29.112 ] 
00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:29.112 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.372 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.372 "name": "Existed_Raid", 00:07:29.372 "uuid": "695740bd-7605-4105-a3d2-efda60250788", 00:07:29.372 "strip_size_kb": 64, 00:07:29.372 "state": "online", 00:07:29.372 "raid_level": "raid0", 00:07:29.372 "superblock": false, 00:07:29.372 "num_base_bdevs": 3, 00:07:29.372 "num_base_bdevs_discovered": 3, 00:07:29.372 "num_base_bdevs_operational": 3, 00:07:29.372 "base_bdevs_list": [ 00:07:29.372 { 00:07:29.372 "name": "BaseBdev1", 00:07:29.372 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:29.372 "is_configured": true, 00:07:29.372 "data_offset": 0, 00:07:29.372 "data_size": 65536 00:07:29.372 }, 00:07:29.372 { 00:07:29.372 "name": "BaseBdev2", 00:07:29.372 "uuid": "e120693c-f1c3-4d38-bfb3-c3b5b17defa7", 00:07:29.372 "is_configured": true, 00:07:29.372 "data_offset": 0, 00:07:29.372 "data_size": 65536 00:07:29.372 }, 00:07:29.372 { 00:07:29.372 "name": "BaseBdev3", 00:07:29.372 "uuid": "4a31c143-8694-420c-a753-267b7fa169c3", 00:07:29.372 "is_configured": true, 00:07:29.372 "data_offset": 0, 00:07:29.372 "data_size": 65536 00:07:29.372 } 00:07:29.372 ] 00:07:29.372 }' 00:07:29.372 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.372 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:29.632 [2024-11-27 21:39:52.655155] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:29.632 "name": "Existed_Raid", 00:07:29.632 "aliases": [ 00:07:29.632 "695740bd-7605-4105-a3d2-efda60250788" 00:07:29.632 ], 00:07:29.632 "product_name": "Raid Volume", 00:07:29.632 "block_size": 512, 00:07:29.632 "num_blocks": 196608, 00:07:29.632 "uuid": "695740bd-7605-4105-a3d2-efda60250788", 00:07:29.632 "assigned_rate_limits": { 00:07:29.632 "rw_ios_per_sec": 0, 00:07:29.632 "rw_mbytes_per_sec": 0, 00:07:29.632 "r_mbytes_per_sec": 0, 00:07:29.632 "w_mbytes_per_sec": 0 00:07:29.632 }, 00:07:29.632 "claimed": false, 00:07:29.632 "zoned": false, 00:07:29.632 "supported_io_types": { 00:07:29.632 "read": true, 00:07:29.632 "write": true, 00:07:29.632 "unmap": true, 00:07:29.632 "flush": true, 00:07:29.632 "reset": true, 00:07:29.632 "nvme_admin": false, 00:07:29.632 "nvme_io": false, 00:07:29.632 "nvme_io_md": false, 00:07:29.632 "write_zeroes": true, 00:07:29.632 "zcopy": false, 00:07:29.632 "get_zone_info": false, 00:07:29.632 "zone_management": false, 00:07:29.632 
"zone_append": false, 00:07:29.632 "compare": false, 00:07:29.632 "compare_and_write": false, 00:07:29.632 "abort": false, 00:07:29.632 "seek_hole": false, 00:07:29.632 "seek_data": false, 00:07:29.632 "copy": false, 00:07:29.632 "nvme_iov_md": false 00:07:29.632 }, 00:07:29.632 "memory_domains": [ 00:07:29.632 { 00:07:29.632 "dma_device_id": "system", 00:07:29.632 "dma_device_type": 1 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.632 "dma_device_type": 2 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "dma_device_id": "system", 00:07:29.632 "dma_device_type": 1 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.632 "dma_device_type": 2 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "dma_device_id": "system", 00:07:29.632 "dma_device_type": 1 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.632 "dma_device_type": 2 00:07:29.632 } 00:07:29.632 ], 00:07:29.632 "driver_specific": { 00:07:29.632 "raid": { 00:07:29.632 "uuid": "695740bd-7605-4105-a3d2-efda60250788", 00:07:29.632 "strip_size_kb": 64, 00:07:29.632 "state": "online", 00:07:29.632 "raid_level": "raid0", 00:07:29.632 "superblock": false, 00:07:29.632 "num_base_bdevs": 3, 00:07:29.632 "num_base_bdevs_discovered": 3, 00:07:29.632 "num_base_bdevs_operational": 3, 00:07:29.632 "base_bdevs_list": [ 00:07:29.632 { 00:07:29.632 "name": "BaseBdev1", 00:07:29.632 "uuid": "03a98140-30f2-461d-a76d-32529b798f48", 00:07:29.632 "is_configured": true, 00:07:29.632 "data_offset": 0, 00:07:29.632 "data_size": 65536 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "name": "BaseBdev2", 00:07:29.632 "uuid": "e120693c-f1c3-4d38-bfb3-c3b5b17defa7", 00:07:29.632 "is_configured": true, 00:07:29.632 "data_offset": 0, 00:07:29.632 "data_size": 65536 00:07:29.632 }, 00:07:29.632 { 00:07:29.632 "name": "BaseBdev3", 00:07:29.632 "uuid": "4a31c143-8694-420c-a753-267b7fa169c3", 00:07:29.632 "is_configured": true, 
00:07:29.632 "data_offset": 0, 00:07:29.632 "data_size": 65536 00:07:29.632 } 00:07:29.632 ] 00:07:29.632 } 00:07:29.632 } 00:07:29.632 }' 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:29.632 BaseBdev2 00:07:29.632 BaseBdev3' 00:07:29.632 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.892 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 [2024-11-27 21:39:52.930407] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.893 [2024-11-27 21:39:52.930433] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.893 [2024-11-27 21:39:52.930493] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.893 "name": "Existed_Raid", 00:07:29.893 "uuid": "695740bd-7605-4105-a3d2-efda60250788", 00:07:29.893 "strip_size_kb": 64, 00:07:29.893 "state": "offline", 00:07:29.893 "raid_level": "raid0", 00:07:29.893 "superblock": false, 00:07:29.893 "num_base_bdevs": 3, 00:07:29.893 "num_base_bdevs_discovered": 2, 00:07:29.893 "num_base_bdevs_operational": 2, 00:07:29.893 "base_bdevs_list": [ 00:07:29.893 { 00:07:29.893 "name": null, 00:07:29.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.893 "is_configured": false, 00:07:29.893 "data_offset": 0, 00:07:29.893 "data_size": 65536 00:07:29.893 }, 00:07:29.893 { 00:07:29.893 "name": "BaseBdev2", 00:07:29.893 "uuid": "e120693c-f1c3-4d38-bfb3-c3b5b17defa7", 00:07:29.893 "is_configured": true, 00:07:29.893 "data_offset": 0, 00:07:29.893 "data_size": 65536 00:07:29.893 }, 00:07:29.893 { 00:07:29.893 "name": "BaseBdev3", 00:07:29.893 "uuid": "4a31c143-8694-420c-a753-267b7fa169c3", 00:07:29.893 "is_configured": true, 00:07:29.893 "data_offset": 0, 00:07:29.893 "data_size": 65536 00:07:29.893 } 00:07:29.893 ] 00:07:29.893 }' 00:07:29.893 21:39:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.893 21:39:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 [2024-11-27 21:39:53.376892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.464 21:39:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 [2024-11-27 21:39:53.439947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:30.464 [2024-11-27 21:39:53.439994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.464 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.465 [ 00:07:30.465 { 00:07:30.465 "name": "BaseBdev2", 00:07:30.465 "aliases": [ 00:07:30.465 "643a034c-5af3-44be-9314-057293af255a" 00:07:30.465 ], 00:07:30.465 "product_name": "Malloc disk", 00:07:30.465 "block_size": 512, 00:07:30.465 "num_blocks": 65536, 00:07:30.465 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:30.465 "assigned_rate_limits": { 00:07:30.465 "rw_ios_per_sec": 0, 00:07:30.465 "rw_mbytes_per_sec": 0, 00:07:30.465 "r_mbytes_per_sec": 0, 00:07:30.465 "w_mbytes_per_sec": 0 00:07:30.465 }, 00:07:30.465 "claimed": false, 00:07:30.465 "zoned": false, 00:07:30.465 "supported_io_types": { 00:07:30.465 "read": true, 00:07:30.465 "write": true, 00:07:30.465 "unmap": true, 00:07:30.465 "flush": true, 00:07:30.465 "reset": true, 00:07:30.465 "nvme_admin": false, 00:07:30.465 "nvme_io": false, 00:07:30.465 "nvme_io_md": false, 00:07:30.465 "write_zeroes": true, 00:07:30.465 "zcopy": true, 00:07:30.465 "get_zone_info": false, 00:07:30.465 "zone_management": false, 00:07:30.465 "zone_append": false, 00:07:30.465 "compare": false, 00:07:30.465 "compare_and_write": false, 00:07:30.465 "abort": true, 00:07:30.465 "seek_hole": false, 00:07:30.465 "seek_data": false, 00:07:30.465 "copy": true, 00:07:30.465 "nvme_iov_md": false 00:07:30.465 }, 00:07:30.465 "memory_domains": [ 00:07:30.465 { 00:07:30.465 "dma_device_id": "system", 00:07:30.465 "dma_device_type": 1 00:07:30.465 }, 
00:07:30.465 { 00:07:30.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.465 "dma_device_type": 2 00:07:30.465 } 00:07:30.465 ], 00:07:30.465 "driver_specific": {} 00:07:30.465 } 00:07:30.465 ] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.465 BaseBdev3 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.465 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 [ 00:07:30.726 { 00:07:30.726 "name": "BaseBdev3", 00:07:30.726 "aliases": [ 00:07:30.726 "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8" 00:07:30.726 ], 00:07:30.726 "product_name": "Malloc disk", 00:07:30.726 "block_size": 512, 00:07:30.726 "num_blocks": 65536, 00:07:30.726 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:30.726 "assigned_rate_limits": { 00:07:30.726 "rw_ios_per_sec": 0, 00:07:30.726 "rw_mbytes_per_sec": 0, 00:07:30.726 "r_mbytes_per_sec": 0, 00:07:30.726 "w_mbytes_per_sec": 0 00:07:30.726 }, 00:07:30.726 "claimed": false, 00:07:30.726 "zoned": false, 00:07:30.726 "supported_io_types": { 00:07:30.726 "read": true, 00:07:30.726 "write": true, 00:07:30.726 "unmap": true, 00:07:30.726 "flush": true, 00:07:30.726 "reset": true, 00:07:30.726 "nvme_admin": false, 00:07:30.726 "nvme_io": false, 00:07:30.726 "nvme_io_md": false, 00:07:30.726 "write_zeroes": true, 00:07:30.726 "zcopy": true, 00:07:30.726 "get_zone_info": false, 00:07:30.726 "zone_management": false, 00:07:30.726 "zone_append": false, 00:07:30.726 "compare": false, 00:07:30.726 "compare_and_write": false, 00:07:30.726 "abort": true, 00:07:30.726 "seek_hole": false, 00:07:30.726 "seek_data": false, 00:07:30.726 "copy": true, 00:07:30.726 "nvme_iov_md": false 00:07:30.726 }, 00:07:30.726 "memory_domains": [ 00:07:30.726 { 00:07:30.726 "dma_device_id": "system", 00:07:30.726 "dma_device_type": 1 00:07:30.726 }, 00:07:30.726 { 
00:07:30.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.726 "dma_device_type": 2 00:07:30.726 } 00:07:30.726 ], 00:07:30.726 "driver_specific": {} 00:07:30.726 } 00:07:30.726 ] 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 [2024-11-27 21:39:53.602420] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:30.726 [2024-11-27 21:39:53.602495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:30.726 [2024-11-27 21:39:53.602534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.726 [2024-11-27 21:39:53.604349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.726 "name": "Existed_Raid", 00:07:30.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.726 "strip_size_kb": 64, 00:07:30.726 "state": "configuring", 00:07:30.726 "raid_level": "raid0", 00:07:30.726 "superblock": false, 00:07:30.726 "num_base_bdevs": 3, 00:07:30.726 "num_base_bdevs_discovered": 2, 00:07:30.726 "num_base_bdevs_operational": 3, 00:07:30.726 "base_bdevs_list": [ 00:07:30.726 { 00:07:30.726 "name": "BaseBdev1", 00:07:30.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.726 
"is_configured": false, 00:07:30.726 "data_offset": 0, 00:07:30.726 "data_size": 0 00:07:30.726 }, 00:07:30.726 { 00:07:30.726 "name": "BaseBdev2", 00:07:30.726 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:30.726 "is_configured": true, 00:07:30.726 "data_offset": 0, 00:07:30.726 "data_size": 65536 00:07:30.726 }, 00:07:30.726 { 00:07:30.726 "name": "BaseBdev3", 00:07:30.726 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:30.726 "is_configured": true, 00:07:30.726 "data_offset": 0, 00:07:30.726 "data_size": 65536 00:07:30.726 } 00:07:30.726 ] 00:07:30.726 }' 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.726 21:39:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.986 [2024-11-27 21:39:54.021714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.986 21:39:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.986 "name": "Existed_Raid", 00:07:30.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.986 "strip_size_kb": 64, 00:07:30.986 "state": "configuring", 00:07:30.986 "raid_level": "raid0", 00:07:30.986 "superblock": false, 00:07:30.986 "num_base_bdevs": 3, 00:07:30.986 "num_base_bdevs_discovered": 1, 00:07:30.986 "num_base_bdevs_operational": 3, 00:07:30.986 "base_bdevs_list": [ 00:07:30.986 { 00:07:30.986 "name": "BaseBdev1", 00:07:30.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.986 "is_configured": false, 00:07:30.986 "data_offset": 0, 00:07:30.986 "data_size": 0 00:07:30.986 }, 00:07:30.986 { 00:07:30.986 "name": null, 00:07:30.986 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:30.986 "is_configured": false, 00:07:30.986 "data_offset": 0, 
00:07:30.986 "data_size": 65536 00:07:30.986 }, 00:07:30.986 { 00:07:30.986 "name": "BaseBdev3", 00:07:30.986 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:30.986 "is_configured": true, 00:07:30.986 "data_offset": 0, 00:07:30.986 "data_size": 65536 00:07:30.986 } 00:07:30.986 ] 00:07:30.986 }' 00:07:30.986 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.987 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.556 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.557 [2024-11-27 21:39:54.503722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.557 BaseBdev1 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.557 [ 00:07:31.557 { 00:07:31.557 "name": "BaseBdev1", 00:07:31.557 "aliases": [ 00:07:31.557 "192a56b7-8ef2-4dc9-87f1-80acac58942c" 00:07:31.557 ], 00:07:31.557 "product_name": "Malloc disk", 00:07:31.557 "block_size": 512, 00:07:31.557 "num_blocks": 65536, 00:07:31.557 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:31.557 "assigned_rate_limits": { 00:07:31.557 "rw_ios_per_sec": 0, 00:07:31.557 "rw_mbytes_per_sec": 0, 00:07:31.557 "r_mbytes_per_sec": 0, 00:07:31.557 "w_mbytes_per_sec": 0 00:07:31.557 }, 00:07:31.557 "claimed": true, 00:07:31.557 "claim_type": "exclusive_write", 00:07:31.557 "zoned": false, 00:07:31.557 "supported_io_types": { 00:07:31.557 "read": true, 00:07:31.557 "write": true, 00:07:31.557 "unmap": 
true, 00:07:31.557 "flush": true, 00:07:31.557 "reset": true, 00:07:31.557 "nvme_admin": false, 00:07:31.557 "nvme_io": false, 00:07:31.557 "nvme_io_md": false, 00:07:31.557 "write_zeroes": true, 00:07:31.557 "zcopy": true, 00:07:31.557 "get_zone_info": false, 00:07:31.557 "zone_management": false, 00:07:31.557 "zone_append": false, 00:07:31.557 "compare": false, 00:07:31.557 "compare_and_write": false, 00:07:31.557 "abort": true, 00:07:31.557 "seek_hole": false, 00:07:31.557 "seek_data": false, 00:07:31.557 "copy": true, 00:07:31.557 "nvme_iov_md": false 00:07:31.557 }, 00:07:31.557 "memory_domains": [ 00:07:31.557 { 00:07:31.557 "dma_device_id": "system", 00:07:31.557 "dma_device_type": 1 00:07:31.557 }, 00:07:31.557 { 00:07:31.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.557 "dma_device_type": 2 00:07:31.557 } 00:07:31.557 ], 00:07:31.557 "driver_specific": {} 00:07:31.557 } 00:07:31.557 ] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.557 21:39:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.557 "name": "Existed_Raid", 00:07:31.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.557 "strip_size_kb": 64, 00:07:31.557 "state": "configuring", 00:07:31.557 "raid_level": "raid0", 00:07:31.557 "superblock": false, 00:07:31.557 "num_base_bdevs": 3, 00:07:31.557 "num_base_bdevs_discovered": 2, 00:07:31.557 "num_base_bdevs_operational": 3, 00:07:31.557 "base_bdevs_list": [ 00:07:31.557 { 00:07:31.557 "name": "BaseBdev1", 00:07:31.557 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:31.557 "is_configured": true, 00:07:31.557 "data_offset": 0, 00:07:31.557 "data_size": 65536 00:07:31.557 }, 00:07:31.557 { 00:07:31.557 "name": null, 00:07:31.557 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:31.557 "is_configured": false, 00:07:31.557 "data_offset": 0, 00:07:31.557 "data_size": 65536 00:07:31.557 }, 00:07:31.557 { 00:07:31.557 "name": "BaseBdev3", 00:07:31.557 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:31.557 "is_configured": true, 00:07:31.557 "data_offset": 0, 
00:07:31.557 "data_size": 65536 00:07:31.557 } 00:07:31.557 ] 00:07:31.557 }' 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.557 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.127 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.127 21:39:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:32.127 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.127 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.127 21:39:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.127 [2024-11-27 21:39:55.030877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.127 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.128 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.128 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.128 "name": "Existed_Raid", 00:07:32.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.128 "strip_size_kb": 64, 00:07:32.128 "state": "configuring", 00:07:32.128 "raid_level": "raid0", 00:07:32.128 "superblock": false, 00:07:32.128 "num_base_bdevs": 3, 00:07:32.128 "num_base_bdevs_discovered": 1, 00:07:32.128 "num_base_bdevs_operational": 3, 00:07:32.128 "base_bdevs_list": [ 00:07:32.128 { 00:07:32.128 "name": "BaseBdev1", 00:07:32.128 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:32.128 "is_configured": true, 00:07:32.128 "data_offset": 0, 00:07:32.128 "data_size": 65536 00:07:32.128 }, 00:07:32.128 { 
00:07:32.128 "name": null, 00:07:32.128 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:32.128 "is_configured": false, 00:07:32.128 "data_offset": 0, 00:07:32.128 "data_size": 65536 00:07:32.128 }, 00:07:32.128 { 00:07:32.128 "name": null, 00:07:32.128 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:32.128 "is_configured": false, 00:07:32.128 "data_offset": 0, 00:07:32.128 "data_size": 65536 00:07:32.128 } 00:07:32.128 ] 00:07:32.128 }' 00:07:32.128 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.128 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.388 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.388 [2024-11-27 21:39:55.506088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.648 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.648 "name": "Existed_Raid", 00:07:32.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.649 "strip_size_kb": 64, 00:07:32.649 "state": "configuring", 00:07:32.649 "raid_level": "raid0", 00:07:32.649 
"superblock": false, 00:07:32.649 "num_base_bdevs": 3, 00:07:32.649 "num_base_bdevs_discovered": 2, 00:07:32.649 "num_base_bdevs_operational": 3, 00:07:32.649 "base_bdevs_list": [ 00:07:32.649 { 00:07:32.649 "name": "BaseBdev1", 00:07:32.649 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:32.649 "is_configured": true, 00:07:32.649 "data_offset": 0, 00:07:32.649 "data_size": 65536 00:07:32.649 }, 00:07:32.649 { 00:07:32.649 "name": null, 00:07:32.649 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:32.649 "is_configured": false, 00:07:32.649 "data_offset": 0, 00:07:32.649 "data_size": 65536 00:07:32.649 }, 00:07:32.649 { 00:07:32.649 "name": "BaseBdev3", 00:07:32.649 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:32.649 "is_configured": true, 00:07:32.649 "data_offset": 0, 00:07:32.649 "data_size": 65536 00:07:32.649 } 00:07:32.649 ] 00:07:32.649 }' 00:07:32.649 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.649 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.909 [2024-11-27 21:39:55.937375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.909 "name": "Existed_Raid", 00:07:32.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.909 "strip_size_kb": 64, 00:07:32.909 "state": "configuring", 00:07:32.909 "raid_level": "raid0", 00:07:32.909 "superblock": false, 00:07:32.909 "num_base_bdevs": 3, 00:07:32.909 "num_base_bdevs_discovered": 1, 00:07:32.909 "num_base_bdevs_operational": 3, 00:07:32.909 "base_bdevs_list": [ 00:07:32.909 { 00:07:32.909 "name": null, 00:07:32.909 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:32.909 "is_configured": false, 00:07:32.909 "data_offset": 0, 00:07:32.909 "data_size": 65536 00:07:32.909 }, 00:07:32.909 { 00:07:32.909 "name": null, 00:07:32.909 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:32.909 "is_configured": false, 00:07:32.909 "data_offset": 0, 00:07:32.909 "data_size": 65536 00:07:32.909 }, 00:07:32.909 { 00:07:32.909 "name": "BaseBdev3", 00:07:32.909 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:32.909 "is_configured": true, 00:07:32.909 "data_offset": 0, 00:07:32.909 "data_size": 65536 00:07:32.909 } 00:07:32.909 ] 00:07:32.909 }' 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.909 21:39:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.507 [2024-11-27 21:39:56.410895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.507 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.508 "name": "Existed_Raid", 00:07:33.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:33.508 "strip_size_kb": 64, 00:07:33.508 "state": "configuring", 00:07:33.508 "raid_level": "raid0", 00:07:33.508 "superblock": false, 00:07:33.508 "num_base_bdevs": 3, 00:07:33.508 "num_base_bdevs_discovered": 2, 00:07:33.508 "num_base_bdevs_operational": 3, 00:07:33.508 "base_bdevs_list": [ 00:07:33.508 { 00:07:33.508 "name": null, 00:07:33.508 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:33.508 "is_configured": false, 00:07:33.508 "data_offset": 0, 00:07:33.508 "data_size": 65536 00:07:33.508 }, 00:07:33.508 { 00:07:33.508 "name": "BaseBdev2", 00:07:33.508 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:33.508 "is_configured": true, 00:07:33.508 "data_offset": 0, 00:07:33.508 "data_size": 65536 00:07:33.508 }, 00:07:33.508 { 00:07:33.508 "name": "BaseBdev3", 00:07:33.508 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:33.508 "is_configured": true, 00:07:33.508 "data_offset": 0, 00:07:33.508 "data_size": 65536 00:07:33.508 } 00:07:33.508 ] 00:07:33.508 }' 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.508 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.767 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.768 21:39:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.768 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.027 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.027 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 192a56b7-8ef2-4dc9-87f1-80acac58942c 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.028 [2024-11-27 21:39:56.940776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:34.028 [2024-11-27 21:39:56.940824] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:34.028 [2024-11-27 21:39:56.940833] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:34.028 [2024-11-27 21:39:56.941064] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 
00:07:34.028 [2024-11-27 21:39:56.941230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:34.028 [2024-11-27 21:39:56.941241] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:34.028 [2024-11-27 21:39:56.941422] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.028 NewBaseBdev 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:34.028 [ 00:07:34.028 { 00:07:34.028 "name": "NewBaseBdev", 00:07:34.028 "aliases": [ 00:07:34.028 "192a56b7-8ef2-4dc9-87f1-80acac58942c" 00:07:34.028 ], 00:07:34.028 "product_name": "Malloc disk", 00:07:34.028 "block_size": 512, 00:07:34.028 "num_blocks": 65536, 00:07:34.028 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:34.028 "assigned_rate_limits": { 00:07:34.028 "rw_ios_per_sec": 0, 00:07:34.028 "rw_mbytes_per_sec": 0, 00:07:34.028 "r_mbytes_per_sec": 0, 00:07:34.028 "w_mbytes_per_sec": 0 00:07:34.028 }, 00:07:34.028 "claimed": true, 00:07:34.028 "claim_type": "exclusive_write", 00:07:34.028 "zoned": false, 00:07:34.028 "supported_io_types": { 00:07:34.028 "read": true, 00:07:34.028 "write": true, 00:07:34.028 "unmap": true, 00:07:34.028 "flush": true, 00:07:34.028 "reset": true, 00:07:34.028 "nvme_admin": false, 00:07:34.028 "nvme_io": false, 00:07:34.028 "nvme_io_md": false, 00:07:34.028 "write_zeroes": true, 00:07:34.028 "zcopy": true, 00:07:34.028 "get_zone_info": false, 00:07:34.028 "zone_management": false, 00:07:34.028 "zone_append": false, 00:07:34.028 "compare": false, 00:07:34.028 "compare_and_write": false, 00:07:34.028 "abort": true, 00:07:34.028 "seek_hole": false, 00:07:34.028 "seek_data": false, 00:07:34.028 "copy": true, 00:07:34.028 "nvme_iov_md": false 00:07:34.028 }, 00:07:34.028 "memory_domains": [ 00:07:34.028 { 00:07:34.028 "dma_device_id": "system", 00:07:34.028 "dma_device_type": 1 00:07:34.028 }, 00:07:34.028 { 00:07:34.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.028 "dma_device_type": 2 00:07:34.028 } 00:07:34.028 ], 00:07:34.028 "driver_specific": {} 00:07:34.028 } 00:07:34.028 ] 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.028 21:39:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.028 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:34.028 "name": "Existed_Raid", 00:07:34.028 "uuid": "6fed0774-dece-4af1-8dc3-7f35e9a9c7bb", 00:07:34.028 "strip_size_kb": 64, 00:07:34.028 "state": "online", 00:07:34.028 "raid_level": "raid0", 00:07:34.028 "superblock": false, 00:07:34.028 "num_base_bdevs": 3, 00:07:34.028 
"num_base_bdevs_discovered": 3, 00:07:34.028 "num_base_bdevs_operational": 3, 00:07:34.028 "base_bdevs_list": [ 00:07:34.028 { 00:07:34.028 "name": "NewBaseBdev", 00:07:34.028 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:34.028 "is_configured": true, 00:07:34.028 "data_offset": 0, 00:07:34.028 "data_size": 65536 00:07:34.028 }, 00:07:34.028 { 00:07:34.028 "name": "BaseBdev2", 00:07:34.028 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:34.028 "is_configured": true, 00:07:34.028 "data_offset": 0, 00:07:34.028 "data_size": 65536 00:07:34.028 }, 00:07:34.028 { 00:07:34.028 "name": "BaseBdev3", 00:07:34.028 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:34.028 "is_configured": true, 00:07:34.028 "data_offset": 0, 00:07:34.028 "data_size": 65536 00:07:34.028 } 00:07:34.028 ] 00:07:34.028 }' 00:07:34.028 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:34.028 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:34.287 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:34.288 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:34.288 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:34.288 21:39:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.288 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.288 [2024-11-27 21:39:57.408476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:34.546 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.546 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:34.546 "name": "Existed_Raid", 00:07:34.546 "aliases": [ 00:07:34.546 "6fed0774-dece-4af1-8dc3-7f35e9a9c7bb" 00:07:34.546 ], 00:07:34.546 "product_name": "Raid Volume", 00:07:34.546 "block_size": 512, 00:07:34.546 "num_blocks": 196608, 00:07:34.546 "uuid": "6fed0774-dece-4af1-8dc3-7f35e9a9c7bb", 00:07:34.546 "assigned_rate_limits": { 00:07:34.546 "rw_ios_per_sec": 0, 00:07:34.546 "rw_mbytes_per_sec": 0, 00:07:34.546 "r_mbytes_per_sec": 0, 00:07:34.546 "w_mbytes_per_sec": 0 00:07:34.546 }, 00:07:34.546 "claimed": false, 00:07:34.546 "zoned": false, 00:07:34.546 "supported_io_types": { 00:07:34.546 "read": true, 00:07:34.546 "write": true, 00:07:34.546 "unmap": true, 00:07:34.546 "flush": true, 00:07:34.546 "reset": true, 00:07:34.546 "nvme_admin": false, 00:07:34.546 "nvme_io": false, 00:07:34.546 "nvme_io_md": false, 00:07:34.546 "write_zeroes": true, 00:07:34.546 "zcopy": false, 00:07:34.546 "get_zone_info": false, 00:07:34.546 "zone_management": false, 00:07:34.546 "zone_append": false, 00:07:34.546 "compare": false, 00:07:34.546 "compare_and_write": false, 00:07:34.546 "abort": false, 00:07:34.546 "seek_hole": false, 00:07:34.546 "seek_data": false, 00:07:34.546 "copy": false, 00:07:34.546 "nvme_iov_md": false 00:07:34.546 }, 00:07:34.546 "memory_domains": [ 00:07:34.546 { 00:07:34.546 "dma_device_id": "system", 00:07:34.546 "dma_device_type": 1 00:07:34.546 }, 00:07:34.546 { 00:07:34.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.546 "dma_device_type": 2 00:07:34.546 }, 
00:07:34.546 { 00:07:34.546 "dma_device_id": "system", 00:07:34.546 "dma_device_type": 1 00:07:34.546 }, 00:07:34.546 { 00:07:34.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.546 "dma_device_type": 2 00:07:34.546 }, 00:07:34.546 { 00:07:34.546 "dma_device_id": "system", 00:07:34.546 "dma_device_type": 1 00:07:34.546 }, 00:07:34.546 { 00:07:34.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.546 "dma_device_type": 2 00:07:34.546 } 00:07:34.546 ], 00:07:34.546 "driver_specific": { 00:07:34.546 "raid": { 00:07:34.546 "uuid": "6fed0774-dece-4af1-8dc3-7f35e9a9c7bb", 00:07:34.546 "strip_size_kb": 64, 00:07:34.546 "state": "online", 00:07:34.546 "raid_level": "raid0", 00:07:34.546 "superblock": false, 00:07:34.546 "num_base_bdevs": 3, 00:07:34.546 "num_base_bdevs_discovered": 3, 00:07:34.546 "num_base_bdevs_operational": 3, 00:07:34.546 "base_bdevs_list": [ 00:07:34.546 { 00:07:34.546 "name": "NewBaseBdev", 00:07:34.546 "uuid": "192a56b7-8ef2-4dc9-87f1-80acac58942c", 00:07:34.546 "is_configured": true, 00:07:34.546 "data_offset": 0, 00:07:34.546 "data_size": 65536 00:07:34.546 }, 00:07:34.546 { 00:07:34.547 "name": "BaseBdev2", 00:07:34.547 "uuid": "643a034c-5af3-44be-9314-057293af255a", 00:07:34.547 "is_configured": true, 00:07:34.547 "data_offset": 0, 00:07:34.547 "data_size": 65536 00:07:34.547 }, 00:07:34.547 { 00:07:34.547 "name": "BaseBdev3", 00:07:34.547 "uuid": "04c2b0c8-a470-4ce1-b2e2-85d3ce2506a8", 00:07:34.547 "is_configured": true, 00:07:34.547 "data_offset": 0, 00:07:34.547 "data_size": 65536 00:07:34.547 } 00:07:34.547 ] 00:07:34.547 } 00:07:34.547 } 00:07:34.547 }' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:34.547 BaseBdev2 00:07:34.547 BaseBdev3' 00:07:34.547 21:39:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.547 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.547 [2024-11-27 21:39:57.663734] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:34.547 [2024-11-27 21:39:57.663805] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.547 [2024-11-27 21:39:57.663911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.547 [2024-11-27 21:39:57.663980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.547 [2024-11-27 21:39:57.664029] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74784 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74784 ']' 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74784 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74784 00:07:34.807 killing process with pid 74784 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74784' 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74784 00:07:34.807 [2024-11-27 21:39:57.708117] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.807 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74784 00:07:34.807 [2024-11-27 21:39:57.738250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.067 21:39:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:35.067 00:07:35.067 real 0m8.544s 00:07:35.067 user 0m14.611s 00:07:35.067 sys 0m1.690s 00:07:35.067 ************************************ 00:07:35.067 END TEST 
raid_state_function_test 00:07:35.067 ************************************ 00:07:35.067 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.067 21:39:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.067 21:39:58 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:07:35.067 21:39:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.067 21:39:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.067 21:39:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.067 ************************************ 00:07:35.067 START TEST raid_state_function_test_sb 00:07:35.067 ************************************ 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.067 21:39:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:35.067 Process raid pid: 75383 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=75383 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75383' 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75383 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75383 ']' 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.067 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.067 [2024-11-27 21:39:58.108232] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:35.067 [2024-11-27 21:39:58.108430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.327 [2024-11-27 21:39:58.264644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.327 [2024-11-27 21:39:58.289316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.327 [2024-11-27 21:39:58.330977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.327 [2024-11-27 21:39:58.331008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.896 [2024-11-27 21:39:58.937284] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.896 [2024-11-27 21:39:58.937336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.896 [2024-11-27 21:39:58.937346] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.896 [2024-11-27 21:39:58.937356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.896 [2024-11-27 21:39:58.937362] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:07:35.896 [2024-11-27 21:39:58.937373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.896 "name": "Existed_Raid", 00:07:35.896 "uuid": "1d4535fe-89a7-47d2-8ed9-3cecd9c70f76", 00:07:35.896 "strip_size_kb": 64, 00:07:35.896 "state": "configuring", 00:07:35.896 "raid_level": "raid0", 00:07:35.896 "superblock": true, 00:07:35.896 "num_base_bdevs": 3, 00:07:35.896 "num_base_bdevs_discovered": 0, 00:07:35.896 "num_base_bdevs_operational": 3, 00:07:35.896 "base_bdevs_list": [ 00:07:35.896 { 00:07:35.896 "name": "BaseBdev1", 00:07:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.896 "is_configured": false, 00:07:35.896 "data_offset": 0, 00:07:35.896 "data_size": 0 00:07:35.896 }, 00:07:35.896 { 00:07:35.896 "name": "BaseBdev2", 00:07:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.896 "is_configured": false, 00:07:35.896 "data_offset": 0, 00:07:35.896 "data_size": 0 00:07:35.896 }, 00:07:35.896 { 00:07:35.896 "name": "BaseBdev3", 00:07:35.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.896 "is_configured": false, 00:07:35.896 "data_offset": 0, 00:07:35.896 "data_size": 0 00:07:35.896 } 00:07:35.896 ] 00:07:35.896 }' 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.896 21:39:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 [2024-11-27 21:39:59.388417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.465 [2024-11-27 21:39:59.388496] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 [2024-11-27 21:39:59.396437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.465 [2024-11-27 21:39:59.396511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.465 [2024-11-27 21:39:59.396552] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.465 [2024-11-27 21:39:59.396588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.465 [2024-11-27 21:39:59.396617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:36.465 [2024-11-27 21:39:59.396656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 [2024-11-27 21:39:59.413232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.465 BaseBdev1 
00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 [ 00:07:36.465 { 00:07:36.465 "name": "BaseBdev1", 00:07:36.465 "aliases": [ 00:07:36.465 "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe" 00:07:36.465 ], 00:07:36.465 "product_name": "Malloc disk", 00:07:36.465 "block_size": 512, 00:07:36.465 "num_blocks": 65536, 00:07:36.465 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:36.465 "assigned_rate_limits": { 00:07:36.465 
"rw_ios_per_sec": 0, 00:07:36.465 "rw_mbytes_per_sec": 0, 00:07:36.465 "r_mbytes_per_sec": 0, 00:07:36.465 "w_mbytes_per_sec": 0 00:07:36.465 }, 00:07:36.465 "claimed": true, 00:07:36.465 "claim_type": "exclusive_write", 00:07:36.465 "zoned": false, 00:07:36.465 "supported_io_types": { 00:07:36.465 "read": true, 00:07:36.465 "write": true, 00:07:36.465 "unmap": true, 00:07:36.465 "flush": true, 00:07:36.465 "reset": true, 00:07:36.465 "nvme_admin": false, 00:07:36.465 "nvme_io": false, 00:07:36.465 "nvme_io_md": false, 00:07:36.465 "write_zeroes": true, 00:07:36.465 "zcopy": true, 00:07:36.465 "get_zone_info": false, 00:07:36.465 "zone_management": false, 00:07:36.465 "zone_append": false, 00:07:36.465 "compare": false, 00:07:36.465 "compare_and_write": false, 00:07:36.465 "abort": true, 00:07:36.465 "seek_hole": false, 00:07:36.465 "seek_data": false, 00:07:36.465 "copy": true, 00:07:36.465 "nvme_iov_md": false 00:07:36.465 }, 00:07:36.465 "memory_domains": [ 00:07:36.465 { 00:07:36.465 "dma_device_id": "system", 00:07:36.465 "dma_device_type": 1 00:07:36.465 }, 00:07:36.465 { 00:07:36.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.465 "dma_device_type": 2 00:07:36.465 } 00:07:36.465 ], 00:07:36.465 "driver_specific": {} 00:07:36.465 } 00:07:36.465 ] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.465 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.465 "name": "Existed_Raid", 00:07:36.465 "uuid": "88eba524-6a0d-4746-a088-bb89041a1d96", 00:07:36.465 "strip_size_kb": 64, 00:07:36.465 "state": "configuring", 00:07:36.465 "raid_level": "raid0", 00:07:36.465 "superblock": true, 00:07:36.465 "num_base_bdevs": 3, 00:07:36.465 "num_base_bdevs_discovered": 1, 00:07:36.465 "num_base_bdevs_operational": 3, 00:07:36.465 "base_bdevs_list": [ 00:07:36.465 { 00:07:36.465 "name": "BaseBdev1", 00:07:36.465 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:36.465 "is_configured": true, 00:07:36.465 "data_offset": 2048, 00:07:36.465 "data_size": 63488 
00:07:36.465 }, 00:07:36.465 { 00:07:36.465 "name": "BaseBdev2", 00:07:36.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.465 "is_configured": false, 00:07:36.465 "data_offset": 0, 00:07:36.465 "data_size": 0 00:07:36.465 }, 00:07:36.465 { 00:07:36.465 "name": "BaseBdev3", 00:07:36.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.465 "is_configured": false, 00:07:36.465 "data_offset": 0, 00:07:36.465 "data_size": 0 00:07:36.465 } 00:07:36.466 ] 00:07:36.466 }' 00:07:36.466 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.466 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 [2024-11-27 21:39:59.896443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.034 [2024-11-27 21:39:59.896547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 [2024-11-27 21:39:59.904470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.034 [2024-11-27 
21:39:59.906481] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.034 [2024-11-27 21:39:59.906553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.034 [2024-11-27 21:39:59.906610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:37.034 [2024-11-27 21:39:59.906649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.034 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.034 "name": "Existed_Raid", 00:07:37.034 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:37.034 "strip_size_kb": 64, 00:07:37.034 "state": "configuring", 00:07:37.034 "raid_level": "raid0", 00:07:37.034 "superblock": true, 00:07:37.034 "num_base_bdevs": 3, 00:07:37.034 "num_base_bdevs_discovered": 1, 00:07:37.034 "num_base_bdevs_operational": 3, 00:07:37.034 "base_bdevs_list": [ 00:07:37.034 { 00:07:37.034 "name": "BaseBdev1", 00:07:37.035 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:37.035 "is_configured": true, 00:07:37.035 "data_offset": 2048, 00:07:37.035 "data_size": 63488 00:07:37.035 }, 00:07:37.035 { 00:07:37.035 "name": "BaseBdev2", 00:07:37.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.035 "is_configured": false, 00:07:37.035 "data_offset": 0, 00:07:37.035 "data_size": 0 00:07:37.035 }, 00:07:37.035 { 00:07:37.035 "name": "BaseBdev3", 00:07:37.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.035 "is_configured": false, 00:07:37.035 "data_offset": 0, 00:07:37.035 "data_size": 0 00:07:37.035 } 00:07:37.035 ] 00:07:37.035 }' 00:07:37.035 21:39:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.035 21:39:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 [2024-11-27 21:40:00.346620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.295 BaseBdev2 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 [ 00:07:37.295 { 00:07:37.295 "name": "BaseBdev2", 00:07:37.295 "aliases": [ 00:07:37.295 "0930dc20-1f75-4ffc-b9a1-9f5171a6b967" 00:07:37.295 ], 00:07:37.295 "product_name": "Malloc disk", 00:07:37.295 "block_size": 512, 00:07:37.295 "num_blocks": 65536, 00:07:37.295 "uuid": "0930dc20-1f75-4ffc-b9a1-9f5171a6b967", 00:07:37.295 "assigned_rate_limits": { 00:07:37.295 "rw_ios_per_sec": 0, 00:07:37.295 "rw_mbytes_per_sec": 0, 00:07:37.295 "r_mbytes_per_sec": 0, 00:07:37.295 "w_mbytes_per_sec": 0 00:07:37.295 }, 00:07:37.295 "claimed": true, 00:07:37.295 "claim_type": "exclusive_write", 00:07:37.295 "zoned": false, 00:07:37.295 "supported_io_types": { 00:07:37.295 "read": true, 00:07:37.295 "write": true, 00:07:37.295 "unmap": true, 00:07:37.295 "flush": true, 00:07:37.295 "reset": true, 00:07:37.295 "nvme_admin": false, 00:07:37.295 "nvme_io": false, 00:07:37.295 "nvme_io_md": false, 00:07:37.295 "write_zeroes": true, 00:07:37.295 "zcopy": true, 00:07:37.295 "get_zone_info": false, 00:07:37.295 "zone_management": false, 00:07:37.295 "zone_append": false, 00:07:37.295 "compare": false, 00:07:37.295 "compare_and_write": false, 00:07:37.295 "abort": true, 00:07:37.295 "seek_hole": false, 00:07:37.295 "seek_data": false, 00:07:37.295 "copy": true, 00:07:37.295 "nvme_iov_md": false 00:07:37.295 }, 00:07:37.295 "memory_domains": [ 00:07:37.295 { 00:07:37.295 "dma_device_id": "system", 00:07:37.295 "dma_device_type": 1 00:07:37.295 }, 00:07:37.295 { 00:07:37.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.295 "dma_device_type": 2 00:07:37.295 } 00:07:37.295 ], 00:07:37.295 "driver_specific": {} 00:07:37.295 } 00:07:37.295 ] 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.295 21:40:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.555 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.555 "name": "Existed_Raid", 00:07:37.555 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:37.555 "strip_size_kb": 64, 00:07:37.555 "state": "configuring", 00:07:37.555 "raid_level": "raid0", 00:07:37.555 "superblock": true, 00:07:37.555 "num_base_bdevs": 3, 00:07:37.555 "num_base_bdevs_discovered": 2, 00:07:37.555 "num_base_bdevs_operational": 3, 00:07:37.555 "base_bdevs_list": [ 00:07:37.555 { 00:07:37.555 "name": "BaseBdev1", 00:07:37.555 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:37.555 "is_configured": true, 00:07:37.555 "data_offset": 2048, 00:07:37.555 "data_size": 63488 00:07:37.555 }, 00:07:37.555 { 00:07:37.555 "name": "BaseBdev2", 00:07:37.555 "uuid": "0930dc20-1f75-4ffc-b9a1-9f5171a6b967", 00:07:37.555 "is_configured": true, 00:07:37.555 "data_offset": 2048, 00:07:37.555 "data_size": 63488 00:07:37.555 }, 00:07:37.555 { 00:07:37.555 "name": "BaseBdev3", 00:07:37.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.555 "is_configured": false, 00:07:37.555 "data_offset": 0, 00:07:37.555 "data_size": 0 00:07:37.555 } 00:07:37.555 ] 00:07:37.555 }' 00:07:37.555 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.555 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.816 [2024-11-27 21:40:00.804347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:37.816 [2024-11-27 21:40:00.805159] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:37.816 [2024-11-27 21:40:00.805395] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:37.816 BaseBdev3 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.816 [2024-11-27 21:40:00.806618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:37.816 [2024-11-27 21:40:00.807150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:37.816 [2024-11-27 21:40:00.807196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:37.816 [2024-11-27 21:40:00.807614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.816 [ 00:07:37.816 { 00:07:37.816 "name": "BaseBdev3", 00:07:37.816 "aliases": [ 00:07:37.816 "292aa1ad-b8fe-4322-bb1f-3cd84cd3b53c" 00:07:37.816 ], 00:07:37.816 "product_name": "Malloc disk", 00:07:37.816 "block_size": 512, 00:07:37.816 "num_blocks": 65536, 00:07:37.816 "uuid": "292aa1ad-b8fe-4322-bb1f-3cd84cd3b53c", 00:07:37.816 "assigned_rate_limits": { 00:07:37.816 "rw_ios_per_sec": 0, 00:07:37.816 "rw_mbytes_per_sec": 0, 00:07:37.816 "r_mbytes_per_sec": 0, 00:07:37.816 "w_mbytes_per_sec": 0 00:07:37.816 }, 00:07:37.816 "claimed": true, 00:07:37.816 "claim_type": "exclusive_write", 00:07:37.816 "zoned": false, 00:07:37.816 "supported_io_types": { 00:07:37.816 "read": true, 00:07:37.816 "write": true, 00:07:37.816 "unmap": true, 00:07:37.816 "flush": true, 00:07:37.816 "reset": true, 00:07:37.816 "nvme_admin": false, 00:07:37.816 "nvme_io": false, 00:07:37.816 "nvme_io_md": false, 00:07:37.816 "write_zeroes": true, 00:07:37.816 "zcopy": true, 00:07:37.816 "get_zone_info": false, 00:07:37.816 "zone_management": false, 00:07:37.816 "zone_append": false, 00:07:37.816 "compare": false, 00:07:37.816 "compare_and_write": false, 00:07:37.816 "abort": true, 00:07:37.816 "seek_hole": false, 00:07:37.816 "seek_data": false, 00:07:37.816 "copy": true, 00:07:37.816 "nvme_iov_md": false 00:07:37.816 }, 00:07:37.816 "memory_domains": [ 00:07:37.816 { 00:07:37.816 "dma_device_id": "system", 00:07:37.816 "dma_device_type": 1 00:07:37.816 }, 00:07:37.816 { 00:07:37.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.816 "dma_device_type": 2 00:07:37.816 } 00:07:37.816 ], 00:07:37.816 "driver_specific": 
{} 00:07:37.816 } 00:07:37.816 ] 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.816 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.816 "name": "Existed_Raid", 00:07:37.817 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:37.817 "strip_size_kb": 64, 00:07:37.817 "state": "online", 00:07:37.817 "raid_level": "raid0", 00:07:37.817 "superblock": true, 00:07:37.817 "num_base_bdevs": 3, 00:07:37.817 "num_base_bdevs_discovered": 3, 00:07:37.817 "num_base_bdevs_operational": 3, 00:07:37.817 "base_bdevs_list": [ 00:07:37.817 { 00:07:37.817 "name": "BaseBdev1", 00:07:37.817 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:37.817 "is_configured": true, 00:07:37.817 "data_offset": 2048, 00:07:37.817 "data_size": 63488 00:07:37.817 }, 00:07:37.817 { 00:07:37.817 "name": "BaseBdev2", 00:07:37.817 "uuid": "0930dc20-1f75-4ffc-b9a1-9f5171a6b967", 00:07:37.817 "is_configured": true, 00:07:37.817 "data_offset": 2048, 00:07:37.817 "data_size": 63488 00:07:37.817 }, 00:07:37.817 { 00:07:37.817 "name": "BaseBdev3", 00:07:37.817 "uuid": "292aa1ad-b8fe-4322-bb1f-3cd84cd3b53c", 00:07:37.817 "is_configured": true, 00:07:37.817 "data_offset": 2048, 00:07:37.817 "data_size": 63488 00:07:37.817 } 00:07:37.817 ] 00:07:37.817 }' 00:07:37.817 21:40:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.817 21:40:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.386 [2024-11-27 21:40:01.311811] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.386 "name": "Existed_Raid", 00:07:38.386 "aliases": [ 00:07:38.386 "7003269c-12a6-4051-beef-4bb085b09db7" 00:07:38.386 ], 00:07:38.386 "product_name": "Raid Volume", 00:07:38.386 "block_size": 512, 00:07:38.386 "num_blocks": 190464, 00:07:38.386 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:38.386 "assigned_rate_limits": { 00:07:38.386 "rw_ios_per_sec": 0, 00:07:38.386 "rw_mbytes_per_sec": 0, 00:07:38.386 "r_mbytes_per_sec": 0, 00:07:38.386 "w_mbytes_per_sec": 0 00:07:38.386 }, 00:07:38.386 "claimed": false, 00:07:38.386 "zoned": false, 00:07:38.386 "supported_io_types": { 00:07:38.386 "read": true, 00:07:38.386 "write": true, 00:07:38.386 "unmap": true, 00:07:38.386 "flush": true, 00:07:38.386 "reset": true, 00:07:38.386 "nvme_admin": false, 00:07:38.386 "nvme_io": false, 00:07:38.386 "nvme_io_md": false, 00:07:38.386 
"write_zeroes": true, 00:07:38.386 "zcopy": false, 00:07:38.386 "get_zone_info": false, 00:07:38.386 "zone_management": false, 00:07:38.386 "zone_append": false, 00:07:38.386 "compare": false, 00:07:38.386 "compare_and_write": false, 00:07:38.386 "abort": false, 00:07:38.386 "seek_hole": false, 00:07:38.386 "seek_data": false, 00:07:38.386 "copy": false, 00:07:38.386 "nvme_iov_md": false 00:07:38.386 }, 00:07:38.386 "memory_domains": [ 00:07:38.386 { 00:07:38.386 "dma_device_id": "system", 00:07:38.386 "dma_device_type": 1 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.386 "dma_device_type": 2 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "dma_device_id": "system", 00:07:38.386 "dma_device_type": 1 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.386 "dma_device_type": 2 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "dma_device_id": "system", 00:07:38.386 "dma_device_type": 1 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.386 "dma_device_type": 2 00:07:38.386 } 00:07:38.386 ], 00:07:38.386 "driver_specific": { 00:07:38.386 "raid": { 00:07:38.386 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:38.386 "strip_size_kb": 64, 00:07:38.386 "state": "online", 00:07:38.386 "raid_level": "raid0", 00:07:38.386 "superblock": true, 00:07:38.386 "num_base_bdevs": 3, 00:07:38.386 "num_base_bdevs_discovered": 3, 00:07:38.386 "num_base_bdevs_operational": 3, 00:07:38.386 "base_bdevs_list": [ 00:07:38.386 { 00:07:38.386 "name": "BaseBdev1", 00:07:38.386 "uuid": "afa43f43-ccf6-4527-aef5-4a6ae1f32cbe", 00:07:38.386 "is_configured": true, 00:07:38.386 "data_offset": 2048, 00:07:38.386 "data_size": 63488 00:07:38.386 }, 00:07:38.386 { 00:07:38.386 "name": "BaseBdev2", 00:07:38.386 "uuid": "0930dc20-1f75-4ffc-b9a1-9f5171a6b967", 00:07:38.386 "is_configured": true, 00:07:38.386 "data_offset": 2048, 00:07:38.386 "data_size": 63488 00:07:38.386 }, 
00:07:38.386 { 00:07:38.386 "name": "BaseBdev3", 00:07:38.386 "uuid": "292aa1ad-b8fe-4322-bb1f-3cd84cd3b53c", 00:07:38.386 "is_configured": true, 00:07:38.386 "data_offset": 2048, 00:07:38.386 "data_size": 63488 00:07:38.386 } 00:07:38.386 ] 00:07:38.386 } 00:07:38.386 } 00:07:38.386 }' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.386 BaseBdev2 00:07:38.386 BaseBdev3' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.386 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.387 
21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.387 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.646 [2024-11-27 21:40:01.543139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.646 [2024-11-27 21:40:01.543163] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.646 [2024-11-27 21:40:01.543214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.646 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.646 "name": "Existed_Raid", 00:07:38.646 "uuid": "7003269c-12a6-4051-beef-4bb085b09db7", 00:07:38.646 "strip_size_kb": 64, 00:07:38.646 "state": "offline", 00:07:38.646 "raid_level": "raid0", 00:07:38.646 "superblock": true, 00:07:38.646 "num_base_bdevs": 3, 00:07:38.646 "num_base_bdevs_discovered": 2, 00:07:38.646 "num_base_bdevs_operational": 2, 00:07:38.646 "base_bdevs_list": [ 00:07:38.646 { 00:07:38.646 "name": null, 00:07:38.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.647 "is_configured": false, 00:07:38.647 "data_offset": 0, 00:07:38.647 "data_size": 63488 00:07:38.647 }, 00:07:38.647 { 00:07:38.647 "name": "BaseBdev2", 00:07:38.647 "uuid": "0930dc20-1f75-4ffc-b9a1-9f5171a6b967", 00:07:38.647 "is_configured": true, 00:07:38.647 "data_offset": 2048, 00:07:38.647 "data_size": 63488 00:07:38.647 }, 00:07:38.647 { 00:07:38.647 "name": "BaseBdev3", 00:07:38.647 "uuid": "292aa1ad-b8fe-4322-bb1f-3cd84cd3b53c", 
00:07:38.647 "is_configured": true, 00:07:38.647 "data_offset": 2048, 00:07:38.647 "data_size": 63488 00:07:38.647 } 00:07:38.647 ] 00:07:38.647 }' 00:07:38.647 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.647 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.906 21:40:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:38.906 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 [2024-11-27 21:40:02.041564] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 [2024-11-27 21:40:02.108558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:39.166 [2024-11-27 21:40:02.108656] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.166 BaseBdev2 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:39.166 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 [ 00:07:39.167 { 00:07:39.167 "name": "BaseBdev2", 00:07:39.167 "aliases": [ 00:07:39.167 "a3a611d9-51fe-4ac5-b7bb-20086138071e" 00:07:39.167 ], 00:07:39.167 "product_name": "Malloc disk", 00:07:39.167 "block_size": 512, 00:07:39.167 "num_blocks": 65536, 00:07:39.167 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:39.167 "assigned_rate_limits": { 00:07:39.167 "rw_ios_per_sec": 0, 00:07:39.167 "rw_mbytes_per_sec": 0, 00:07:39.167 "r_mbytes_per_sec": 0, 00:07:39.167 "w_mbytes_per_sec": 0 00:07:39.167 }, 00:07:39.167 "claimed": false, 00:07:39.167 "zoned": false, 00:07:39.167 "supported_io_types": { 00:07:39.167 "read": true, 00:07:39.167 "write": true, 00:07:39.167 "unmap": true, 00:07:39.167 "flush": true, 00:07:39.167 "reset": true, 00:07:39.167 "nvme_admin": false, 00:07:39.167 "nvme_io": false, 00:07:39.167 "nvme_io_md": false, 00:07:39.167 "write_zeroes": true, 00:07:39.167 "zcopy": true, 00:07:39.167 "get_zone_info": false, 00:07:39.167 "zone_management": false, 00:07:39.167 
"zone_append": false, 00:07:39.167 "compare": false, 00:07:39.167 "compare_and_write": false, 00:07:39.167 "abort": true, 00:07:39.167 "seek_hole": false, 00:07:39.167 "seek_data": false, 00:07:39.167 "copy": true, 00:07:39.167 "nvme_iov_md": false 00:07:39.167 }, 00:07:39.167 "memory_domains": [ 00:07:39.167 { 00:07:39.167 "dma_device_id": "system", 00:07:39.167 "dma_device_type": 1 00:07:39.167 }, 00:07:39.167 { 00:07:39.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.167 "dma_device_type": 2 00:07:39.167 } 00:07:39.167 ], 00:07:39.167 "driver_specific": {} 00:07:39.167 } 00:07:39.167 ] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 BaseBdev3 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:39.167 
21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.167 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.167 [ 00:07:39.167 { 00:07:39.167 "name": "BaseBdev3", 00:07:39.167 "aliases": [ 00:07:39.167 "acb6d682-206a-4bc3-a499-daf281e6908e" 00:07:39.167 ], 00:07:39.167 "product_name": "Malloc disk", 00:07:39.167 "block_size": 512, 00:07:39.167 "num_blocks": 65536, 00:07:39.167 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:39.167 "assigned_rate_limits": { 00:07:39.167 "rw_ios_per_sec": 0, 00:07:39.167 "rw_mbytes_per_sec": 0, 00:07:39.167 "r_mbytes_per_sec": 0, 00:07:39.167 "w_mbytes_per_sec": 0 00:07:39.167 }, 00:07:39.167 "claimed": false, 00:07:39.167 "zoned": false, 00:07:39.167 "supported_io_types": { 00:07:39.167 "read": true, 00:07:39.167 "write": true, 00:07:39.167 "unmap": true, 00:07:39.167 "flush": true, 00:07:39.167 "reset": true, 00:07:39.167 "nvme_admin": false, 00:07:39.167 "nvme_io": false, 00:07:39.167 "nvme_io_md": false, 00:07:39.167 "write_zeroes": true, 00:07:39.167 "zcopy": true, 00:07:39.167 "get_zone_info": false, 
00:07:39.167 "zone_management": false, 00:07:39.167 "zone_append": false, 00:07:39.167 "compare": false, 00:07:39.167 "compare_and_write": false, 00:07:39.167 "abort": true, 00:07:39.167 "seek_hole": false, 00:07:39.167 "seek_data": false, 00:07:39.167 "copy": true, 00:07:39.167 "nvme_iov_md": false 00:07:39.167 }, 00:07:39.167 "memory_domains": [ 00:07:39.167 { 00:07:39.167 "dma_device_id": "system", 00:07:39.167 "dma_device_type": 1 00:07:39.167 }, 00:07:39.167 { 00:07:39.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.168 "dma_device_type": 2 00:07:39.168 } 00:07:39.168 ], 00:07:39.168 "driver_specific": {} 00:07:39.168 } 00:07:39.168 ] 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.168 [2024-11-27 21:40:02.268063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:39.168 [2024-11-27 21:40:02.268160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:39.168 [2024-11-27 21:40:02.268202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.168 [2024-11-27 21:40:02.270014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.168 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.427 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.427 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:07:39.427 "name": "Existed_Raid", 00:07:39.427 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:39.427 "strip_size_kb": 64, 00:07:39.427 "state": "configuring", 00:07:39.427 "raid_level": "raid0", 00:07:39.427 "superblock": true, 00:07:39.427 "num_base_bdevs": 3, 00:07:39.427 "num_base_bdevs_discovered": 2, 00:07:39.427 "num_base_bdevs_operational": 3, 00:07:39.427 "base_bdevs_list": [ 00:07:39.427 { 00:07:39.427 "name": "BaseBdev1", 00:07:39.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.427 "is_configured": false, 00:07:39.427 "data_offset": 0, 00:07:39.427 "data_size": 0 00:07:39.427 }, 00:07:39.427 { 00:07:39.427 "name": "BaseBdev2", 00:07:39.427 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:39.427 "is_configured": true, 00:07:39.427 "data_offset": 2048, 00:07:39.427 "data_size": 63488 00:07:39.428 }, 00:07:39.428 { 00:07:39.428 "name": "BaseBdev3", 00:07:39.428 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:39.428 "is_configured": true, 00:07:39.428 "data_offset": 2048, 00:07:39.428 "data_size": 63488 00:07:39.428 } 00:07:39.428 ] 00:07:39.428 }' 00:07:39.428 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.428 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.687 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.688 [2024-11-27 21:40:02.647406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.688 "name": "Existed_Raid", 00:07:39.688 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:39.688 "strip_size_kb": 64, 00:07:39.688 "state": "configuring", 00:07:39.688 "raid_level": "raid0", 
00:07:39.688 "superblock": true, 00:07:39.688 "num_base_bdevs": 3, 00:07:39.688 "num_base_bdevs_discovered": 1, 00:07:39.688 "num_base_bdevs_operational": 3, 00:07:39.688 "base_bdevs_list": [ 00:07:39.688 { 00:07:39.688 "name": "BaseBdev1", 00:07:39.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:39.688 "is_configured": false, 00:07:39.688 "data_offset": 0, 00:07:39.688 "data_size": 0 00:07:39.688 }, 00:07:39.688 { 00:07:39.688 "name": null, 00:07:39.688 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:39.688 "is_configured": false, 00:07:39.688 "data_offset": 0, 00:07:39.688 "data_size": 63488 00:07:39.688 }, 00:07:39.688 { 00:07:39.688 "name": "BaseBdev3", 00:07:39.688 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:39.688 "is_configured": true, 00:07:39.688 "data_offset": 2048, 00:07:39.688 "data_size": 63488 00:07:39.688 } 00:07:39.688 ] 00:07:39.688 }' 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.688 21:40:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.947 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.947 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.947 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:39.947 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:39.947 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.233 [2024-11-27 21:40:03.105481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:40.233 BaseBdev1 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.233 [ 00:07:40.233 { 00:07:40.233 "name": "BaseBdev1", 00:07:40.233 
"aliases": [ 00:07:40.233 "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76" 00:07:40.233 ], 00:07:40.233 "product_name": "Malloc disk", 00:07:40.233 "block_size": 512, 00:07:40.233 "num_blocks": 65536, 00:07:40.233 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:40.233 "assigned_rate_limits": { 00:07:40.233 "rw_ios_per_sec": 0, 00:07:40.233 "rw_mbytes_per_sec": 0, 00:07:40.233 "r_mbytes_per_sec": 0, 00:07:40.233 "w_mbytes_per_sec": 0 00:07:40.233 }, 00:07:40.233 "claimed": true, 00:07:40.233 "claim_type": "exclusive_write", 00:07:40.233 "zoned": false, 00:07:40.233 "supported_io_types": { 00:07:40.233 "read": true, 00:07:40.233 "write": true, 00:07:40.233 "unmap": true, 00:07:40.233 "flush": true, 00:07:40.233 "reset": true, 00:07:40.233 "nvme_admin": false, 00:07:40.233 "nvme_io": false, 00:07:40.233 "nvme_io_md": false, 00:07:40.233 "write_zeroes": true, 00:07:40.233 "zcopy": true, 00:07:40.233 "get_zone_info": false, 00:07:40.233 "zone_management": false, 00:07:40.233 "zone_append": false, 00:07:40.233 "compare": false, 00:07:40.233 "compare_and_write": false, 00:07:40.233 "abort": true, 00:07:40.233 "seek_hole": false, 00:07:40.233 "seek_data": false, 00:07:40.233 "copy": true, 00:07:40.233 "nvme_iov_md": false 00:07:40.233 }, 00:07:40.233 "memory_domains": [ 00:07:40.233 { 00:07:40.233 "dma_device_id": "system", 00:07:40.233 "dma_device_type": 1 00:07:40.233 }, 00:07:40.233 { 00:07:40.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.233 "dma_device_type": 2 00:07:40.233 } 00:07:40.233 ], 00:07:40.233 "driver_specific": {} 00:07:40.233 } 00:07:40.233 ] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.233 21:40:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.233 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.234 "name": "Existed_Raid", 00:07:40.234 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:40.234 "strip_size_kb": 64, 00:07:40.234 "state": "configuring", 00:07:40.234 "raid_level": "raid0", 00:07:40.234 "superblock": true, 00:07:40.234 "num_base_bdevs": 3, 00:07:40.234 
"num_base_bdevs_discovered": 2, 00:07:40.234 "num_base_bdevs_operational": 3, 00:07:40.234 "base_bdevs_list": [ 00:07:40.234 { 00:07:40.234 "name": "BaseBdev1", 00:07:40.234 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:40.234 "is_configured": true, 00:07:40.234 "data_offset": 2048, 00:07:40.234 "data_size": 63488 00:07:40.234 }, 00:07:40.234 { 00:07:40.234 "name": null, 00:07:40.234 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:40.234 "is_configured": false, 00:07:40.234 "data_offset": 0, 00:07:40.234 "data_size": 63488 00:07:40.234 }, 00:07:40.234 { 00:07:40.234 "name": "BaseBdev3", 00:07:40.234 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:40.234 "is_configured": true, 00:07:40.234 "data_offset": 2048, 00:07:40.234 "data_size": 63488 00:07:40.234 } 00:07:40.234 ] 00:07:40.234 }' 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.234 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.803 21:40:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.803 [2024-11-27 21:40:03.672614] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:40.803 21:40:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:40.803 "name": "Existed_Raid", 00:07:40.803 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:40.803 "strip_size_kb": 64, 00:07:40.803 "state": "configuring", 00:07:40.803 "raid_level": "raid0", 00:07:40.803 "superblock": true, 00:07:40.803 "num_base_bdevs": 3, 00:07:40.803 "num_base_bdevs_discovered": 1, 00:07:40.803 "num_base_bdevs_operational": 3, 00:07:40.803 "base_bdevs_list": [ 00:07:40.803 { 00:07:40.803 "name": "BaseBdev1", 00:07:40.803 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:40.803 "is_configured": true, 00:07:40.803 "data_offset": 2048, 00:07:40.803 "data_size": 63488 00:07:40.803 }, 00:07:40.803 { 00:07:40.803 "name": null, 00:07:40.803 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:40.803 "is_configured": false, 00:07:40.803 "data_offset": 0, 00:07:40.803 "data_size": 63488 00:07:40.803 }, 00:07:40.803 { 00:07:40.803 "name": null, 00:07:40.803 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:40.803 "is_configured": false, 00:07:40.803 "data_offset": 0, 00:07:40.803 "data_size": 63488 00:07:40.803 } 00:07:40.803 ] 00:07:40.803 }' 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:40.803 21:40:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:41.063 21:40:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.063 [2024-11-27 21:40:04.167860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.063 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.322 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.322 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.322 "name": "Existed_Raid", 00:07:41.322 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:41.322 "strip_size_kb": 64, 00:07:41.322 "state": "configuring", 00:07:41.322 "raid_level": "raid0", 00:07:41.322 "superblock": true, 00:07:41.322 "num_base_bdevs": 3, 00:07:41.322 "num_base_bdevs_discovered": 2, 00:07:41.322 "num_base_bdevs_operational": 3, 00:07:41.322 "base_bdevs_list": [ 00:07:41.322 { 00:07:41.322 "name": "BaseBdev1", 00:07:41.322 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:41.322 "is_configured": true, 00:07:41.322 "data_offset": 2048, 00:07:41.322 "data_size": 63488 00:07:41.322 }, 00:07:41.322 { 00:07:41.322 "name": null, 00:07:41.322 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:41.322 "is_configured": false, 00:07:41.322 "data_offset": 0, 00:07:41.322 "data_size": 63488 00:07:41.322 }, 00:07:41.322 { 00:07:41.322 "name": "BaseBdev3", 00:07:41.322 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:41.322 "is_configured": true, 00:07:41.322 "data_offset": 2048, 00:07:41.322 "data_size": 63488 00:07:41.322 } 00:07:41.322 ] 00:07:41.322 }' 00:07:41.322 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.322 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.582 [2024-11-27 21:40:04.635060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.582 "name": "Existed_Raid", 00:07:41.582 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:41.582 "strip_size_kb": 64, 00:07:41.582 "state": "configuring", 00:07:41.582 "raid_level": "raid0", 00:07:41.582 "superblock": true, 00:07:41.582 "num_base_bdevs": 3, 00:07:41.582 "num_base_bdevs_discovered": 1, 00:07:41.582 "num_base_bdevs_operational": 3, 00:07:41.582 "base_bdevs_list": [ 00:07:41.582 { 00:07:41.582 "name": null, 00:07:41.582 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:41.582 "is_configured": false, 00:07:41.582 "data_offset": 0, 00:07:41.582 "data_size": 63488 00:07:41.582 }, 00:07:41.582 { 00:07:41.582 "name": null, 00:07:41.582 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:41.582 "is_configured": false, 00:07:41.582 "data_offset": 0, 00:07:41.582 "data_size": 63488 00:07:41.582 
}, 00:07:41.582 { 00:07:41.582 "name": "BaseBdev3", 00:07:41.582 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:41.582 "is_configured": true, 00:07:41.582 "data_offset": 2048, 00:07:41.582 "data_size": 63488 00:07:41.582 } 00:07:41.582 ] 00:07:41.582 }' 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.582 21:40:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.152 [2024-11-27 21:40:05.092663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.152 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.152 "name": "Existed_Raid", 00:07:42.152 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:42.152 "strip_size_kb": 64, 00:07:42.152 "state": "configuring", 00:07:42.152 "raid_level": "raid0", 00:07:42.152 "superblock": true, 00:07:42.152 "num_base_bdevs": 3, 00:07:42.152 "num_base_bdevs_discovered": 2, 00:07:42.152 
"num_base_bdevs_operational": 3, 00:07:42.152 "base_bdevs_list": [ 00:07:42.152 { 00:07:42.152 "name": null, 00:07:42.152 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:42.152 "is_configured": false, 00:07:42.152 "data_offset": 0, 00:07:42.152 "data_size": 63488 00:07:42.152 }, 00:07:42.152 { 00:07:42.152 "name": "BaseBdev2", 00:07:42.152 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:42.152 "is_configured": true, 00:07:42.152 "data_offset": 2048, 00:07:42.152 "data_size": 63488 00:07:42.152 }, 00:07:42.152 { 00:07:42.152 "name": "BaseBdev3", 00:07:42.152 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:42.152 "is_configured": true, 00:07:42.152 "data_offset": 2048, 00:07:42.152 "data_size": 63488 00:07:42.153 } 00:07:42.153 ] 00:07:42.153 }' 00:07:42.153 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.153 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.411 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.411 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.411 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.411 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:42.411 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.670 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:07:42.670 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55ef7c46-c7b5-438a-88c6-7fdb59d4ef76 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.671 [2024-11-27 21:40:05.602653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:07:42.671 [2024-11-27 21:40:05.602831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:42.671 [2024-11-27 21:40:05.602849] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:42.671 [2024-11-27 21:40:05.603090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:07:42.671 NewBaseBdev 00:07:42.671 [2024-11-27 21:40:05.603205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:42.671 [2024-11-27 21:40:05.603215] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:07:42.671 [2024-11-27 21:40:05.603363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:07:42.671 21:40:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.671 [ 00:07:42.671 { 00:07:42.671 "name": "NewBaseBdev", 00:07:42.671 "aliases": [ 00:07:42.671 "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76" 00:07:42.671 ], 00:07:42.671 "product_name": "Malloc disk", 00:07:42.671 "block_size": 512, 00:07:42.671 "num_blocks": 65536, 00:07:42.671 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:42.671 "assigned_rate_limits": { 00:07:42.671 "rw_ios_per_sec": 0, 00:07:42.671 "rw_mbytes_per_sec": 0, 00:07:42.671 "r_mbytes_per_sec": 0, 00:07:42.671 "w_mbytes_per_sec": 0 00:07:42.671 }, 00:07:42.671 "claimed": true, 00:07:42.671 "claim_type": "exclusive_write", 00:07:42.671 "zoned": false, 00:07:42.671 "supported_io_types": { 00:07:42.671 "read": true, 00:07:42.671 "write": true, 00:07:42.671 "unmap": true, 
00:07:42.671 "flush": true, 00:07:42.671 "reset": true, 00:07:42.671 "nvme_admin": false, 00:07:42.671 "nvme_io": false, 00:07:42.671 "nvme_io_md": false, 00:07:42.671 "write_zeroes": true, 00:07:42.671 "zcopy": true, 00:07:42.671 "get_zone_info": false, 00:07:42.671 "zone_management": false, 00:07:42.671 "zone_append": false, 00:07:42.671 "compare": false, 00:07:42.671 "compare_and_write": false, 00:07:42.671 "abort": true, 00:07:42.671 "seek_hole": false, 00:07:42.671 "seek_data": false, 00:07:42.671 "copy": true, 00:07:42.671 "nvme_iov_md": false 00:07:42.671 }, 00:07:42.671 "memory_domains": [ 00:07:42.671 { 00:07:42.671 "dma_device_id": "system", 00:07:42.671 "dma_device_type": 1 00:07:42.671 }, 00:07:42.671 { 00:07:42.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.671 "dma_device_type": 2 00:07:42.671 } 00:07:42.671 ], 00:07:42.671 "driver_specific": {} 00:07:42.671 } 00:07:42.671 ] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.671 21:40:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.671 "name": "Existed_Raid", 00:07:42.671 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:42.671 "strip_size_kb": 64, 00:07:42.671 "state": "online", 00:07:42.671 "raid_level": "raid0", 00:07:42.671 "superblock": true, 00:07:42.671 "num_base_bdevs": 3, 00:07:42.671 "num_base_bdevs_discovered": 3, 00:07:42.671 "num_base_bdevs_operational": 3, 00:07:42.671 "base_bdevs_list": [ 00:07:42.671 { 00:07:42.671 "name": "NewBaseBdev", 00:07:42.671 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:42.671 "is_configured": true, 00:07:42.671 "data_offset": 2048, 00:07:42.671 "data_size": 63488 00:07:42.671 }, 00:07:42.671 { 00:07:42.671 "name": "BaseBdev2", 00:07:42.671 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:42.671 "is_configured": true, 00:07:42.671 "data_offset": 2048, 00:07:42.671 "data_size": 63488 00:07:42.671 }, 00:07:42.671 { 00:07:42.671 "name": "BaseBdev3", 00:07:42.671 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:42.671 "is_configured": 
true, 00:07:42.671 "data_offset": 2048, 00:07:42.671 "data_size": 63488 00:07:42.671 } 00:07:42.671 ] 00:07:42.671 }' 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.671 21:40:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.930 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.930 [2024-11-27 21:40:06.050234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.189 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.189 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.189 "name": "Existed_Raid", 00:07:43.189 "aliases": [ 00:07:43.189 "1c9f3c35-c5c8-4f67-b732-fd7528bb460e" 00:07:43.189 ], 00:07:43.189 "product_name": "Raid Volume", 
00:07:43.189 "block_size": 512, 00:07:43.189 "num_blocks": 190464, 00:07:43.189 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:43.189 "assigned_rate_limits": { 00:07:43.189 "rw_ios_per_sec": 0, 00:07:43.189 "rw_mbytes_per_sec": 0, 00:07:43.189 "r_mbytes_per_sec": 0, 00:07:43.189 "w_mbytes_per_sec": 0 00:07:43.189 }, 00:07:43.189 "claimed": false, 00:07:43.189 "zoned": false, 00:07:43.189 "supported_io_types": { 00:07:43.189 "read": true, 00:07:43.189 "write": true, 00:07:43.189 "unmap": true, 00:07:43.189 "flush": true, 00:07:43.189 "reset": true, 00:07:43.189 "nvme_admin": false, 00:07:43.189 "nvme_io": false, 00:07:43.189 "nvme_io_md": false, 00:07:43.189 "write_zeroes": true, 00:07:43.189 "zcopy": false, 00:07:43.189 "get_zone_info": false, 00:07:43.189 "zone_management": false, 00:07:43.189 "zone_append": false, 00:07:43.189 "compare": false, 00:07:43.189 "compare_and_write": false, 00:07:43.189 "abort": false, 00:07:43.189 "seek_hole": false, 00:07:43.189 "seek_data": false, 00:07:43.189 "copy": false, 00:07:43.189 "nvme_iov_md": false 00:07:43.189 }, 00:07:43.189 "memory_domains": [ 00:07:43.189 { 00:07:43.189 "dma_device_id": "system", 00:07:43.189 "dma_device_type": 1 00:07:43.189 }, 00:07:43.189 { 00:07:43.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.189 "dma_device_type": 2 00:07:43.189 }, 00:07:43.189 { 00:07:43.189 "dma_device_id": "system", 00:07:43.189 "dma_device_type": 1 00:07:43.189 }, 00:07:43.189 { 00:07:43.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.189 "dma_device_type": 2 00:07:43.189 }, 00:07:43.189 { 00:07:43.189 "dma_device_id": "system", 00:07:43.189 "dma_device_type": 1 00:07:43.189 }, 00:07:43.189 { 00:07:43.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.189 "dma_device_type": 2 00:07:43.189 } 00:07:43.189 ], 00:07:43.189 "driver_specific": { 00:07:43.189 "raid": { 00:07:43.189 "uuid": "1c9f3c35-c5c8-4f67-b732-fd7528bb460e", 00:07:43.189 "strip_size_kb": 64, 00:07:43.189 "state": "online", 00:07:43.189 
"raid_level": "raid0", 00:07:43.189 "superblock": true, 00:07:43.189 "num_base_bdevs": 3, 00:07:43.189 "num_base_bdevs_discovered": 3, 00:07:43.189 "num_base_bdevs_operational": 3, 00:07:43.189 "base_bdevs_list": [ 00:07:43.189 { 00:07:43.189 "name": "NewBaseBdev", 00:07:43.190 "uuid": "55ef7c46-c7b5-438a-88c6-7fdb59d4ef76", 00:07:43.190 "is_configured": true, 00:07:43.190 "data_offset": 2048, 00:07:43.190 "data_size": 63488 00:07:43.190 }, 00:07:43.190 { 00:07:43.190 "name": "BaseBdev2", 00:07:43.190 "uuid": "a3a611d9-51fe-4ac5-b7bb-20086138071e", 00:07:43.190 "is_configured": true, 00:07:43.190 "data_offset": 2048, 00:07:43.190 "data_size": 63488 00:07:43.190 }, 00:07:43.190 { 00:07:43.190 "name": "BaseBdev3", 00:07:43.190 "uuid": "acb6d682-206a-4bc3-a499-daf281e6908e", 00:07:43.190 "is_configured": true, 00:07:43.190 "data_offset": 2048, 00:07:43.190 "data_size": 63488 00:07:43.190 } 00:07:43.190 ] 00:07:43.190 } 00:07:43.190 } 00:07:43.190 }' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:07:43.190 BaseBdev2 00:07:43.190 BaseBdev3' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.190 [2024-11-27 21:40:06.293514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:43.190 [2024-11-27 21:40:06.293540] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.190 [2024-11-27 21:40:06.293604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.190 [2024-11-27 21:40:06.293654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.190 [2024-11-27 21:40:06.293665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75383 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75383 ']' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75383 00:07:43.190 21:40:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.190 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75383 00:07:43.450 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.450 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.450 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75383' 00:07:43.450 killing process with pid 75383 00:07:43.450 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75383 00:07:43.450 [2024-11-27 21:40:06.329233] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.450 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75383 00:07:43.451 [2024-11-27 21:40:06.359648] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.451 21:40:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:43.451 00:07:43.451 real 0m8.552s 00:07:43.451 user 0m14.698s 00:07:43.451 sys 0m1.652s 00:07:43.451 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.710 21:40:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.710 ************************************ 00:07:43.710 END TEST raid_state_function_test_sb 00:07:43.710 ************************************ 00:07:43.710 21:40:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:07:43.710 21:40:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:43.710 21:40:06 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.710 21:40:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.710 ************************************ 00:07:43.710 START TEST raid_superblock_test 00:07:43.710 ************************************ 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:43.710 21:40:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75981 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75981 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75981 ']' 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.710 21:40:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.710 [2024-11-27 21:40:06.732587] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:43.711 [2024-11-27 21:40:06.732827] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75981 ] 00:07:43.970 [2024-11-27 21:40:06.888582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.970 [2024-11-27 21:40:06.913489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.970 [2024-11-27 21:40:06.955336] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.970 [2024-11-27 21:40:06.955444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:44.569 
21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.569 malloc1 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.569 [2024-11-27 21:40:07.570117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:44.569 [2024-11-27 21:40:07.570175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.569 [2024-11-27 21:40:07.570193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:07:44.569 [2024-11-27 21:40:07.570206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.569 [2024-11-27 21:40:07.572305] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.569 [2024-11-27 21:40:07.572341] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:44.569 pt1 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.569 malloc2 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.569 [2024-11-27 21:40:07.598468] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:44.569 [2024-11-27 21:40:07.598572] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.569 [2024-11-27 21:40:07.598607] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:44.569 [2024-11-27 21:40:07.598636] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.569 [2024-11-27 21:40:07.600701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.569 [2024-11-27 21:40:07.600770] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:44.569 
pt2 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.569 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.570 malloc3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.570 [2024-11-27 21:40:07.630954] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:44.570 [2024-11-27 21:40:07.631045] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.570 [2024-11-27 21:40:07.631079] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:44.570 [2024-11-27 21:40:07.631107] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.570 [2024-11-27 21:40:07.633326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.570 [2024-11-27 21:40:07.633403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:44.570 pt3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.570 [2024-11-27 21:40:07.642995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:44.570 [2024-11-27 21:40:07.644870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:44.570 [2024-11-27 21:40:07.644927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:44.570 [2024-11-27 21:40:07.645072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:07:44.570 [2024-11-27 21:40:07.645084] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:44.570 [2024-11-27 21:40:07.645353] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:07:44.570 [2024-11-27 21:40:07.645498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:07:44.570 [2024-11-27 21:40:07.645511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:07:44.570 [2024-11-27 21:40:07.645643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.570 21:40:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.570 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.830 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.830 "name": "raid_bdev1", 00:07:44.830 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:44.830 "strip_size_kb": 64, 00:07:44.830 "state": "online", 00:07:44.830 "raid_level": "raid0", 00:07:44.830 "superblock": true, 00:07:44.830 "num_base_bdevs": 3, 00:07:44.830 "num_base_bdevs_discovered": 3, 00:07:44.830 "num_base_bdevs_operational": 3, 00:07:44.830 "base_bdevs_list": [ 00:07:44.830 { 00:07:44.830 "name": "pt1", 00:07:44.830 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:44.830 "is_configured": true, 00:07:44.830 "data_offset": 2048, 00:07:44.830 "data_size": 63488 00:07:44.830 }, 00:07:44.830 { 00:07:44.830 "name": "pt2", 00:07:44.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:44.830 "is_configured": true, 00:07:44.830 "data_offset": 2048, 00:07:44.830 "data_size": 63488 00:07:44.830 }, 00:07:44.830 { 00:07:44.830 "name": "pt3", 00:07:44.830 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:44.830 "is_configured": true, 00:07:44.830 "data_offset": 2048, 00:07:44.830 "data_size": 63488 00:07:44.830 } 00:07:44.830 ] 00:07:44.830 }' 00:07:44.830 21:40:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.830 21:40:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.089 [2024-11-27 21:40:08.094504] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.089 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.089 "name": "raid_bdev1", 00:07:45.089 "aliases": [ 00:07:45.089 "27a7d243-7640-449f-ab8c-cf23e3481f03" 00:07:45.089 ], 00:07:45.089 "product_name": "Raid Volume", 00:07:45.089 "block_size": 512, 00:07:45.089 "num_blocks": 190464, 00:07:45.089 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:45.089 "assigned_rate_limits": { 00:07:45.089 "rw_ios_per_sec": 0, 00:07:45.089 "rw_mbytes_per_sec": 0, 00:07:45.089 "r_mbytes_per_sec": 0, 00:07:45.089 "w_mbytes_per_sec": 0 00:07:45.089 }, 00:07:45.089 "claimed": false, 00:07:45.089 "zoned": false, 00:07:45.089 "supported_io_types": { 00:07:45.089 "read": true, 00:07:45.089 "write": true, 00:07:45.089 "unmap": true, 00:07:45.089 "flush": true, 00:07:45.089 "reset": true, 00:07:45.089 "nvme_admin": false, 00:07:45.089 "nvme_io": false, 00:07:45.089 "nvme_io_md": false, 00:07:45.089 "write_zeroes": true, 00:07:45.090 "zcopy": false, 00:07:45.090 "get_zone_info": false, 00:07:45.090 "zone_management": false, 00:07:45.090 "zone_append": false, 00:07:45.090 "compare": 
false, 00:07:45.090 "compare_and_write": false, 00:07:45.090 "abort": false, 00:07:45.090 "seek_hole": false, 00:07:45.090 "seek_data": false, 00:07:45.090 "copy": false, 00:07:45.090 "nvme_iov_md": false 00:07:45.090 }, 00:07:45.090 "memory_domains": [ 00:07:45.090 { 00:07:45.090 "dma_device_id": "system", 00:07:45.090 "dma_device_type": 1 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.090 "dma_device_type": 2 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "dma_device_id": "system", 00:07:45.090 "dma_device_type": 1 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.090 "dma_device_type": 2 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "dma_device_id": "system", 00:07:45.090 "dma_device_type": 1 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.090 "dma_device_type": 2 00:07:45.090 } 00:07:45.090 ], 00:07:45.090 "driver_specific": { 00:07:45.090 "raid": { 00:07:45.090 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:45.090 "strip_size_kb": 64, 00:07:45.090 "state": "online", 00:07:45.090 "raid_level": "raid0", 00:07:45.090 "superblock": true, 00:07:45.090 "num_base_bdevs": 3, 00:07:45.090 "num_base_bdevs_discovered": 3, 00:07:45.090 "num_base_bdevs_operational": 3, 00:07:45.090 "base_bdevs_list": [ 00:07:45.090 { 00:07:45.090 "name": "pt1", 00:07:45.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.090 "is_configured": true, 00:07:45.090 "data_offset": 2048, 00:07:45.090 "data_size": 63488 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "name": "pt2", 00:07:45.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.090 "is_configured": true, 00:07:45.090 "data_offset": 2048, 00:07:45.090 "data_size": 63488 00:07:45.090 }, 00:07:45.090 { 00:07:45.090 "name": "pt3", 00:07:45.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:45.090 "is_configured": true, 00:07:45.090 "data_offset": 2048, 00:07:45.090 "data_size": 
63488 00:07:45.090 } 00:07:45.090 ] 00:07:45.090 } 00:07:45.090 } 00:07:45.090 }' 00:07:45.090 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.090 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:45.090 pt2 00:07:45.090 pt3' 00:07:45.090 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 
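The property check above extracts the configured base bdev names and a block-size signature with the two jq filters shown. A minimal Python equivalent (values copied from the log JSON; note that jq's `join()` renders `null` entries as empty strings, which is why `cmp_raid_bdev` is `512` followed by trailing spaces):

```python
# Trimmed to the fields the two jq filters inspect.
raid = {
    "block_size": 512,
    "md_size": None,        # null in the dump: no metadata on this volume
    "md_interleave": None,
    "dif_type": None,
    "driver_specific": {"raid": {"base_bdevs_list": [
        {"name": "pt1", "is_configured": True},
        {"name": "pt2", "is_configured": True},
        {"name": "pt3", "is_configured": True},
    ]}},
}

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
names = [b["name"]
         for b in raid["driver_specific"]["raid"]["base_bdevs_list"]
         if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# join() turns null into "", so the result is "512" plus three separators.
sig = " ".join("" if raid[k] is None else str(raid[k])
               for k in ("block_size", "md_size", "md_interleave", "dif_type"))

assert names == ["pt1", "pt2", "pt3"]
assert sig == "512   "  # the '512 ' with trailing spaces seen in the xtrace
```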
21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 [2024-11-27 21:40:08.369969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=27a7d243-7640-449f-ab8c-cf23e3481f03 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 27a7d243-7640-449f-ab8c-cf23e3481f03 ']' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 [2024-11-27 21:40:08.417624] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.350 [2024-11-27 21:40:08.417685] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.350 [2024-11-27 21:40:08.417795] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.350 [2024-11-27 21:40:08.417882] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.350 [2024-11-27 21:40:08.417905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.350 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
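After `bdev_raid_delete raid_bdev1`, the follow-up `bdev_raid_get_bdevs all` returns an empty array, so the `jq -r '.[]'` pipeline produces no output and `raid_bdev` expands empty, letting the `'[' -n '' ']'` guard fall through. A trivial sketch of that check:

```python
import json

# `bdev_raid_get_bdevs all` after the delete: no raid bdevs remain.
out = json.loads("[]")

# jq -r '.[]' over an empty array prints nothing,
# so the captured shell variable is the empty string.
raid_bdev = "\n".join(json.dumps(entry) for entry in out)

assert raid_bdev == ""  # mirrors the `'[' -n '' ']'` test evaluating false
```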
00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.610 21:40:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.611 [2024-11-27 21:40:08.565417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:45.611 [2024-11-27 21:40:08.567287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:45.611 [2024-11-27 21:40:08.567328] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:07:45.611 [2024-11-27 21:40:08.567375] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:45.611 [2024-11-27 21:40:08.567424] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:45.611 [2024-11-27 21:40:08.567469] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:07:45.611 [2024-11-27 21:40:08.567481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.611 [2024-11-27 21:40:08.567491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:07:45.611 request: 00:07:45.611 { 00:07:45.611 "name": "raid_bdev1", 00:07:45.611 "raid_level": "raid0", 00:07:45.611 "base_bdevs": [ 00:07:45.611 "malloc1", 00:07:45.611 "malloc2", 00:07:45.611 "malloc3" 00:07:45.611 ], 00:07:45.611 "strip_size_kb": 64, 00:07:45.611 "superblock": false, 00:07:45.611 "method": "bdev_raid_create", 00:07:45.611 "req_id": 1 00:07:45.611 } 00:07:45.611 Got JSON-RPC error response 00:07:45.611 response: 00:07:45.611 { 00:07:45.611 "code": -17, 00:07:45.611 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:45.611 } 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.611 [2024-11-27 21:40:08.633262] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.611 [2024-11-27 21:40:08.633349] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.611 [2024-11-27 21:40:08.633380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:45.611 [2024-11-27 21:40:08.633428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.611 [2024-11-27 21:40:08.635562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.611 [2024-11-27 21:40:08.635630] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.611 [2024-11-27 21:40:08.635715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:45.611 [2024-11-27 21:40:08.635813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
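The `NOT`-wrapped `bdev_raid_create` above fails because each malloc bdev still carries `raid_bdev1`'s on-disk superblock from the earlier run. The error `code` in the JSON-RPC response, `-17`, is the negated POSIX `EEXIST`; a small sketch of how a client could interpret the response body shown in the log:

```python
import errno
import os

# Error object exactly as returned in the trace above.
response = {
    "code": -17,
    "message": "Failed to create RAID bdev raid_bdev1: File exists",
}

# SPDK's JSON-RPC errors reuse negated errno values.
err = -response["code"]
assert err == errno.EEXIST
# The human-readable suffix of the message is the strerror text.
assert response["message"].endswith(os.strerror(err))
```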
00:07:45.611 pt1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.611 "name": "raid_bdev1", 00:07:45.611 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:45.611 
"strip_size_kb": 64, 00:07:45.611 "state": "configuring", 00:07:45.611 "raid_level": "raid0", 00:07:45.611 "superblock": true, 00:07:45.611 "num_base_bdevs": 3, 00:07:45.611 "num_base_bdevs_discovered": 1, 00:07:45.611 "num_base_bdevs_operational": 3, 00:07:45.611 "base_bdevs_list": [ 00:07:45.611 { 00:07:45.611 "name": "pt1", 00:07:45.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:45.611 "is_configured": true, 00:07:45.611 "data_offset": 2048, 00:07:45.611 "data_size": 63488 00:07:45.611 }, 00:07:45.611 { 00:07:45.611 "name": null, 00:07:45.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:45.611 "is_configured": false, 00:07:45.611 "data_offset": 2048, 00:07:45.611 "data_size": 63488 00:07:45.611 }, 00:07:45.611 { 00:07:45.611 "name": null, 00:07:45.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:45.611 "is_configured": false, 00:07:45.611 "data_offset": 2048, 00:07:45.611 "data_size": 63488 00:07:45.611 } 00:07:45.611 ] 00:07:45.611 }' 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.611 21:40:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 [2024-11-27 21:40:09.068623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.181 [2024-11-27 21:40:09.068696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.181 [2024-11-27 21:40:09.068718] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:07:46.181 [2024-11-27 21:40:09.068731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.181 [2024-11-27 21:40:09.069184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.181 [2024-11-27 21:40:09.069222] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.181 [2024-11-27 21:40:09.069306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:46.181 [2024-11-27 21:40:09.069332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.181 pt2 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 [2024-11-27 21:40:09.076610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.181 21:40:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.181 "name": "raid_bdev1", 00:07:46.181 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:46.181 "strip_size_kb": 64, 00:07:46.181 "state": "configuring", 00:07:46.181 "raid_level": "raid0", 00:07:46.181 "superblock": true, 00:07:46.181 "num_base_bdevs": 3, 00:07:46.181 "num_base_bdevs_discovered": 1, 00:07:46.181 "num_base_bdevs_operational": 3, 00:07:46.181 "base_bdevs_list": [ 00:07:46.181 { 00:07:46.181 "name": "pt1", 00:07:46.181 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.181 "is_configured": true, 00:07:46.181 "data_offset": 2048, 00:07:46.181 "data_size": 63488 00:07:46.181 }, 00:07:46.181 { 00:07:46.181 "name": null, 00:07:46.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.181 "is_configured": false, 00:07:46.181 "data_offset": 0, 00:07:46.181 "data_size": 63488 00:07:46.181 }, 00:07:46.181 { 00:07:46.181 "name": null, 00:07:46.181 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.181 
"is_configured": false, 00:07:46.181 "data_offset": 2048, 00:07:46.181 "data_size": 63488 00:07:46.181 } 00:07:46.181 ] 00:07:46.181 }' 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.181 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.443 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:46.443 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:46.443 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.443 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.443 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.443 [2024-11-27 21:40:09.475926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.443 [2024-11-27 21:40:09.476026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.443 [2024-11-27 21:40:09.476061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:46.443 [2024-11-27 21:40:09.476088] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.443 [2024-11-27 21:40:09.476562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.443 [2024-11-27 21:40:09.476617] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.444 [2024-11-27 21:40:09.476728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:46.444 [2024-11-27 21:40:09.476779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.444 pt2 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.444 [2024-11-27 21:40:09.487896] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:07:46.444 [2024-11-27 21:40:09.487984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.444 [2024-11-27 21:40:09.488017] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:46.444 [2024-11-27 21:40:09.488042] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.444 [2024-11-27 21:40:09.488423] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.444 [2024-11-27 21:40:09.488474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:07:46.444 [2024-11-27 21:40:09.488569] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:07:46.444 [2024-11-27 21:40:09.488616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:07:46.444 [2024-11-27 21:40:09.488744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:46.444 [2024-11-27 21:40:09.488781] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:46.444 [2024-11-27 21:40:09.489061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:07:46.444 [2024-11-27 21:40:09.489223] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:46.444 [2024-11-27 21:40:09.489268] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:07:46.444 [2024-11-27 21:40:09.489437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.444 pt3 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.444 "name": "raid_bdev1", 00:07:46.444 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:46.444 "strip_size_kb": 64, 00:07:46.444 "state": "online", 00:07:46.444 "raid_level": "raid0", 00:07:46.444 "superblock": true, 00:07:46.444 "num_base_bdevs": 3, 00:07:46.444 "num_base_bdevs_discovered": 3, 00:07:46.444 "num_base_bdevs_operational": 3, 00:07:46.444 "base_bdevs_list": [ 00:07:46.444 { 00:07:46.444 "name": "pt1", 00:07:46.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.444 "is_configured": true, 00:07:46.444 "data_offset": 2048, 00:07:46.444 "data_size": 63488 00:07:46.444 }, 00:07:46.444 { 00:07:46.444 "name": "pt2", 00:07:46.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.444 "is_configured": true, 00:07:46.444 "data_offset": 2048, 00:07:46.444 "data_size": 63488 00:07:46.444 }, 00:07:46.444 { 00:07:46.444 "name": "pt3", 00:07:46.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:46.444 "is_configured": true, 00:07:46.444 "data_offset": 2048, 00:07:46.444 "data_size": 63488 00:07:46.444 } 00:07:46.444 ] 00:07:46.444 }' 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.444 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.013 21:40:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.013 [2024-11-27 21:40:09.939422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.013 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.013 "name": "raid_bdev1", 00:07:47.013 "aliases": [ 00:07:47.013 "27a7d243-7640-449f-ab8c-cf23e3481f03" 00:07:47.013 ], 00:07:47.013 "product_name": "Raid Volume", 00:07:47.013 "block_size": 512, 00:07:47.013 "num_blocks": 190464, 00:07:47.013 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:47.013 "assigned_rate_limits": { 00:07:47.013 "rw_ios_per_sec": 0, 00:07:47.013 "rw_mbytes_per_sec": 0, 00:07:47.013 "r_mbytes_per_sec": 0, 00:07:47.013 "w_mbytes_per_sec": 0 00:07:47.013 }, 00:07:47.013 "claimed": false, 00:07:47.013 "zoned": false, 00:07:47.013 "supported_io_types": { 00:07:47.013 "read": true, 00:07:47.013 "write": true, 00:07:47.013 "unmap": true, 00:07:47.013 "flush": true, 00:07:47.013 "reset": true, 00:07:47.013 "nvme_admin": false, 00:07:47.013 "nvme_io": false, 00:07:47.013 "nvme_io_md": false, 00:07:47.013 
"write_zeroes": true, 00:07:47.013 "zcopy": false, 00:07:47.013 "get_zone_info": false, 00:07:47.013 "zone_management": false, 00:07:47.013 "zone_append": false, 00:07:47.013 "compare": false, 00:07:47.013 "compare_and_write": false, 00:07:47.013 "abort": false, 00:07:47.013 "seek_hole": false, 00:07:47.013 "seek_data": false, 00:07:47.013 "copy": false, 00:07:47.013 "nvme_iov_md": false 00:07:47.013 }, 00:07:47.013 "memory_domains": [ 00:07:47.013 { 00:07:47.013 "dma_device_id": "system", 00:07:47.013 "dma_device_type": 1 00:07:47.013 }, 00:07:47.013 { 00:07:47.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.014 "dma_device_type": 2 00:07:47.014 }, 00:07:47.014 { 00:07:47.014 "dma_device_id": "system", 00:07:47.014 "dma_device_type": 1 00:07:47.014 }, 00:07:47.014 { 00:07:47.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.014 "dma_device_type": 2 00:07:47.014 }, 00:07:47.014 { 00:07:47.014 "dma_device_id": "system", 00:07:47.014 "dma_device_type": 1 00:07:47.014 }, 00:07:47.014 { 00:07:47.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.014 "dma_device_type": 2 00:07:47.014 } 00:07:47.014 ], 00:07:47.014 "driver_specific": { 00:07:47.014 "raid": { 00:07:47.014 "uuid": "27a7d243-7640-449f-ab8c-cf23e3481f03", 00:07:47.014 "strip_size_kb": 64, 00:07:47.014 "state": "online", 00:07:47.014 "raid_level": "raid0", 00:07:47.014 "superblock": true, 00:07:47.014 "num_base_bdevs": 3, 00:07:47.014 "num_base_bdevs_discovered": 3, 00:07:47.014 "num_base_bdevs_operational": 3, 00:07:47.014 "base_bdevs_list": [ 00:07:47.014 { 00:07:47.014 "name": "pt1", 00:07:47.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.014 "is_configured": true, 00:07:47.014 "data_offset": 2048, 00:07:47.014 "data_size": 63488 00:07:47.014 }, 00:07:47.014 { 00:07:47.014 "name": "pt2", 00:07:47.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.014 "is_configured": true, 00:07:47.014 "data_offset": 2048, 00:07:47.014 "data_size": 63488 00:07:47.014 }, 00:07:47.014 
{ 00:07:47.014 "name": "pt3", 00:07:47.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:07:47.014 "is_configured": true, 00:07:47.014 "data_offset": 2048, 00:07:47.014 "data_size": 63488 00:07:47.014 } 00:07:47.014 ] 00:07:47.014 } 00:07:47.014 } 00:07:47.014 }' 00:07:47.014 21:40:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.014 pt2 00:07:47.014 pt3' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:47.014 21:40:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.014 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:47.274 
[2024-11-27 21:40:10.230891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 27a7d243-7640-449f-ab8c-cf23e3481f03 '!=' 27a7d243-7640-449f-ab8c-cf23e3481f03 ']' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75981 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75981 ']' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75981 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75981 00:07:47.274 killing process with pid 75981 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75981' 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75981 00:07:47.274 [2024-11-27 21:40:10.304242] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:47.274 [2024-11-27 21:40:10.304317] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:47.274 [2024-11-27 21:40:10.304379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:47.274 [2024-11-27 21:40:10.304388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:07:47.274 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75981 00:07:47.274 [2024-11-27 21:40:10.336159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.534 21:40:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:47.534 00:07:47.534 real 0m3.886s 00:07:47.534 user 0m6.175s 00:07:47.534 sys 0m0.825s 00:07:47.534 ************************************ 00:07:47.534 END TEST raid_superblock_test 00:07:47.534 ************************************ 00:07:47.534 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.534 21:40:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.534 21:40:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:07:47.534 21:40:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.534 21:40:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.534 21:40:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.534 ************************************ 00:07:47.534 START TEST raid_read_error_test 00:07:47.534 ************************************ 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:47.534 21:40:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WcySIALqDV 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76222 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76222 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76222 ']' 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.534 21:40:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.794 [2024-11-27 21:40:10.698522] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:07:47.794 [2024-11-27 21:40:10.698644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76222 ] 00:07:47.794 [2024-11-27 21:40:10.830081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.794 [2024-11-27 21:40:10.855873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.794 [2024-11-27 21:40:10.898252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.794 [2024-11-27 21:40:10.898288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 BaseBdev1_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 true 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 [2024-11-27 21:40:11.553037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:48.733 [2024-11-27 21:40:11.553137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.733 [2024-11-27 21:40:11.553169] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:48.733 [2024-11-27 21:40:11.553180] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.733 [2024-11-27 21:40:11.555296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.733 [2024-11-27 21:40:11.555334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:48.733 BaseBdev1 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 BaseBdev2_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 true 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 [2024-11-27 21:40:11.593442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:48.733 [2024-11-27 21:40:11.593486] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.733 [2024-11-27 21:40:11.593518] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:48.733 [2024-11-27 21:40:11.593534] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.733 [2024-11-27 21:40:11.595576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.733 [2024-11-27 21:40:11.595613] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:48.733 BaseBdev2 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 BaseBdev3_malloc 00:07:48.733 21:40:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 true 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.733 [2024-11-27 21:40:11.633782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:48.733 [2024-11-27 21:40:11.633852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.733 [2024-11-27 21:40:11.633871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:48.733 [2024-11-27 21:40:11.633879] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.733 [2024-11-27 21:40:11.635880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.733 [2024-11-27 21:40:11.635921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:07:48.733 BaseBdev3 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.733 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.734 [2024-11-27 21:40:11.645814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.734 [2024-11-27 21:40:11.647575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.734 [2024-11-27 21:40:11.647713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:48.734 [2024-11-27 21:40:11.647897] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:48.734 [2024-11-27 21:40:11.647913] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:48.734 [2024-11-27 21:40:11.648152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:07:48.734 [2024-11-27 21:40:11.648293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:48.734 [2024-11-27 21:40:11.648308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:48.734 [2024-11-27 21:40:11.648460] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.734 21:40:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.734 "name": "raid_bdev1", 00:07:48.734 "uuid": "4cffd0cc-56e5-4562-a732-25616bae7cd5", 00:07:48.734 "strip_size_kb": 64, 00:07:48.734 "state": "online", 00:07:48.734 "raid_level": "raid0", 00:07:48.734 "superblock": true, 00:07:48.734 "num_base_bdevs": 3, 00:07:48.734 "num_base_bdevs_discovered": 3, 00:07:48.734 "num_base_bdevs_operational": 3, 00:07:48.734 "base_bdevs_list": [ 00:07:48.734 { 00:07:48.734 "name": "BaseBdev1", 00:07:48.734 "uuid": "fa60a1e9-8fa0-5945-b8ac-b0ce4f2dd643", 00:07:48.734 "is_configured": true, 00:07:48.734 "data_offset": 2048, 00:07:48.734 "data_size": 63488 00:07:48.734 }, 00:07:48.734 { 00:07:48.734 "name": "BaseBdev2", 00:07:48.734 "uuid": "c73a5ec3-f569-5faf-bebc-2021da969f68", 00:07:48.734 "is_configured": true, 00:07:48.734 "data_offset": 2048, 00:07:48.734 "data_size": 63488 
00:07:48.734 }, 00:07:48.734 { 00:07:48.734 "name": "BaseBdev3", 00:07:48.734 "uuid": "25730add-0ffa-5d31-9688-1924f2edb9b0", 00:07:48.734 "is_configured": true, 00:07:48.734 "data_offset": 2048, 00:07:48.734 "data_size": 63488 00:07:48.734 } 00:07:48.734 ] 00:07:48.734 }' 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.734 21:40:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.993 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:48.993 21:40:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:49.252 [2024-11-27 21:40:12.129386] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.191 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.192 "name": "raid_bdev1", 00:07:50.192 "uuid": "4cffd0cc-56e5-4562-a732-25616bae7cd5", 00:07:50.192 "strip_size_kb": 64, 00:07:50.192 "state": "online", 00:07:50.192 "raid_level": "raid0", 00:07:50.192 "superblock": true, 00:07:50.192 "num_base_bdevs": 3, 00:07:50.192 "num_base_bdevs_discovered": 3, 00:07:50.192 "num_base_bdevs_operational": 3, 00:07:50.192 "base_bdevs_list": [ 00:07:50.192 { 00:07:50.192 "name": "BaseBdev1", 00:07:50.192 "uuid": "fa60a1e9-8fa0-5945-b8ac-b0ce4f2dd643", 00:07:50.192 "is_configured": true, 00:07:50.192 "data_offset": 2048, 00:07:50.192 "data_size": 63488 
00:07:50.192 }, 00:07:50.192 { 00:07:50.192 "name": "BaseBdev2", 00:07:50.192 "uuid": "c73a5ec3-f569-5faf-bebc-2021da969f68", 00:07:50.192 "is_configured": true, 00:07:50.192 "data_offset": 2048, 00:07:50.192 "data_size": 63488 00:07:50.192 }, 00:07:50.192 { 00:07:50.192 "name": "BaseBdev3", 00:07:50.192 "uuid": "25730add-0ffa-5d31-9688-1924f2edb9b0", 00:07:50.192 "is_configured": true, 00:07:50.192 "data_offset": 2048, 00:07:50.192 "data_size": 63488 00:07:50.192 } 00:07:50.192 ] 00:07:50.192 }' 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.192 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.452 [2024-11-27 21:40:13.489131] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.452 [2024-11-27 21:40:13.489240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.452 [2024-11-27 21:40:13.491904] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.452 [2024-11-27 21:40:13.491959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.452 [2024-11-27 21:40:13.491996] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.452 [2024-11-27 21:40:13.492013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:50.452 { 00:07:50.452 "results": [ 00:07:50.452 { 00:07:50.452 "job": "raid_bdev1", 00:07:50.452 "core_mask": "0x1", 00:07:50.452 "workload": "randrw", 00:07:50.452 "percentage": 50, 
00:07:50.452 "status": "finished", 00:07:50.452 "queue_depth": 1, 00:07:50.452 "io_size": 131072, 00:07:50.452 "runtime": 1.360704, 00:07:50.452 "iops": 16758.971826348712, 00:07:50.452 "mibps": 2094.871478293589, 00:07:50.452 "io_failed": 1, 00:07:50.452 "io_timeout": 0, 00:07:50.452 "avg_latency_us": 82.37111504506117, 00:07:50.452 "min_latency_us": 20.12227074235808, 00:07:50.452 "max_latency_us": 1352.216593886463 00:07:50.452 } 00:07:50.452 ], 00:07:50.452 "core_count": 1 00:07:50.452 } 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76222 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76222 ']' 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76222 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76222 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.452 killing process with pid 76222 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76222' 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76222 00:07:50.452 [2024-11-27 21:40:13.541514] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.452 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76222 00:07:50.452 [2024-11-27 
21:40:13.565871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WcySIALqDV 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.712 ************************************ 00:07:50.712 END TEST raid_read_error_test 00:07:50.712 ************************************ 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:50.712 00:07:50.712 real 0m3.171s 00:07:50.712 user 0m4.021s 00:07:50.712 sys 0m0.494s 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.712 21:40:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.712 21:40:13 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:07:50.712 21:40:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.712 21:40:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.712 21:40:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.972 ************************************ 00:07:50.972 START TEST raid_write_error_test 00:07:50.972 ************************************ 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:07:50.972 21:40:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:50.972 21:40:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oGN7d53IWi 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76352 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76352 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76352 ']' 00:07:50.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.972 21:40:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.972 [2024-11-27 21:40:13.937803] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:50.972 [2024-11-27 21:40:13.938013] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76352 ] 00:07:50.972 [2024-11-27 21:40:14.089604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.232 [2024-11-27 21:40:14.114557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.232 [2024-11-27 21:40:14.156588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.232 [2024-11-27 21:40:14.156623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.801 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.801 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 BaseBdev1_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 true 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 [2024-11-27 21:40:14.779483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:51.802 [2024-11-27 21:40:14.779534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.802 [2024-11-27 21:40:14.779551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:07:51.802 [2024-11-27 21:40:14.779567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.802 [2024-11-27 21:40:14.781780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.802 [2024-11-27 21:40:14.781845] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:51.802 BaseBdev1 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.802 BaseBdev2_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 true 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 [2024-11-27 21:40:14.819784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:51.802 [2024-11-27 21:40:14.819852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.802 [2024-11-27 21:40:14.819871] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:07:51.802 [2024-11-27 21:40:14.819887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.802 [2024-11-27 21:40:14.822018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.802 [2024-11-27 21:40:14.822105] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:51.802 BaseBdev2 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.802 21:40:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 BaseBdev3_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 true 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 [2024-11-27 21:40:14.860135] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:07:51.802 [2024-11-27 21:40:14.860195] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.802 [2024-11-27 21:40:14.860212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:07:51.802 [2024-11-27 21:40:14.860221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.802 [2024-11-27 21:40:14.862337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.802 [2024-11-27 21:40:14.862371] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:07:51.802 BaseBdev3 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 [2024-11-27 21:40:14.872162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.802 [2024-11-27 21:40:14.874035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.802 [2024-11-27 21:40:14.874108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:51.802 [2024-11-27 21:40:14.874272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:07:51.802 [2024-11-27 21:40:14.874290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:07:51.802 [2024-11-27 21:40:14.874565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:07:51.802 [2024-11-27 21:40:14.874737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:07:51.802 [2024-11-27 21:40:14.874747] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:07:51.802 [2024-11-27 21:40:14.874890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.802 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.062 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.062 "name": "raid_bdev1", 00:07:52.062 "uuid": "65bea308-5d27-4ccb-ba6d-30a53abcdefb", 00:07:52.062 "strip_size_kb": 64, 00:07:52.062 "state": "online", 00:07:52.062 "raid_level": "raid0", 00:07:52.062 "superblock": true, 00:07:52.062 "num_base_bdevs": 3, 00:07:52.062 "num_base_bdevs_discovered": 3, 00:07:52.062 "num_base_bdevs_operational": 3, 00:07:52.062 "base_bdevs_list": [ 00:07:52.062 { 00:07:52.062 "name": "BaseBdev1", 
00:07:52.062 "uuid": "07720b3b-9e52-5988-9cc3-4ad0ec2aa9f8", 00:07:52.062 "is_configured": true, 00:07:52.062 "data_offset": 2048, 00:07:52.062 "data_size": 63488 00:07:52.062 }, 00:07:52.062 { 00:07:52.062 "name": "BaseBdev2", 00:07:52.062 "uuid": "568f005f-c040-5e54-b071-92c9ad075806", 00:07:52.062 "is_configured": true, 00:07:52.062 "data_offset": 2048, 00:07:52.062 "data_size": 63488 00:07:52.062 }, 00:07:52.062 { 00:07:52.062 "name": "BaseBdev3", 00:07:52.062 "uuid": "ed548470-5495-560e-b205-7dceb6f4e486", 00:07:52.062 "is_configured": true, 00:07:52.062 "data_offset": 2048, 00:07:52.062 "data_size": 63488 00:07:52.062 } 00:07:52.062 ] 00:07:52.062 }' 00:07:52.062 21:40:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.062 21:40:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.322 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:52.322 21:40:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.322 [2024-11-27 21:40:15.423563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:07:53.260 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.261 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.520 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.520 "name": "raid_bdev1", 00:07:53.520 "uuid": "65bea308-5d27-4ccb-ba6d-30a53abcdefb", 00:07:53.520 "strip_size_kb": 64, 00:07:53.520 "state": "online", 00:07:53.520 
"raid_level": "raid0", 00:07:53.520 "superblock": true, 00:07:53.520 "num_base_bdevs": 3, 00:07:53.520 "num_base_bdevs_discovered": 3, 00:07:53.520 "num_base_bdevs_operational": 3, 00:07:53.520 "base_bdevs_list": [ 00:07:53.520 { 00:07:53.520 "name": "BaseBdev1", 00:07:53.520 "uuid": "07720b3b-9e52-5988-9cc3-4ad0ec2aa9f8", 00:07:53.520 "is_configured": true, 00:07:53.520 "data_offset": 2048, 00:07:53.520 "data_size": 63488 00:07:53.520 }, 00:07:53.520 { 00:07:53.520 "name": "BaseBdev2", 00:07:53.520 "uuid": "568f005f-c040-5e54-b071-92c9ad075806", 00:07:53.520 "is_configured": true, 00:07:53.520 "data_offset": 2048, 00:07:53.520 "data_size": 63488 00:07:53.520 }, 00:07:53.520 { 00:07:53.520 "name": "BaseBdev3", 00:07:53.520 "uuid": "ed548470-5495-560e-b205-7dceb6f4e486", 00:07:53.520 "is_configured": true, 00:07:53.520 "data_offset": 2048, 00:07:53.520 "data_size": 63488 00:07:53.520 } 00:07:53.520 ] 00:07:53.520 }' 00:07:53.520 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.520 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.780 [2024-11-27 21:40:16.791282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.780 [2024-11-27 21:40:16.791374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.780 [2024-11-27 21:40:16.793963] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.780 [2024-11-27 21:40:16.794047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.780 [2024-11-27 21:40:16.794099] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.780 [2024-11-27 21:40:16.794141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:07:53.780 { 00:07:53.780 "results": [ 00:07:53.780 { 00:07:53.780 "job": "raid_bdev1", 00:07:53.780 "core_mask": "0x1", 00:07:53.780 "workload": "randrw", 00:07:53.780 "percentage": 50, 00:07:53.780 "status": "finished", 00:07:53.780 "queue_depth": 1, 00:07:53.780 "io_size": 131072, 00:07:53.780 "runtime": 1.368644, 00:07:53.780 "iops": 16785.957487849286, 00:07:53.780 "mibps": 2098.2446859811607, 00:07:53.780 "io_failed": 1, 00:07:53.780 "io_timeout": 0, 00:07:53.780 "avg_latency_us": 82.3028924357689, 00:07:53.780 "min_latency_us": 25.152838427947597, 00:07:53.780 "max_latency_us": 1423.7624454148472 00:07:53.780 } 00:07:53.780 ], 00:07:53.780 "core_count": 1 00:07:53.780 } 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76352 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76352 ']' 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76352 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.780 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76352 00:07:53.781 killing process with pid 76352 00:07:53.781 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.781 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.781 21:40:16 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76352' 00:07:53.781 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76352 00:07:53.781 [2024-11-27 21:40:16.840182] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.781 21:40:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76352 00:07:53.781 [2024-11-27 21:40:16.864192] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oGN7d53IWi 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:54.041 ************************************ 00:07:54.041 END TEST raid_write_error_test 00:07:54.041 ************************************ 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:54.041 00:07:54.041 real 0m3.235s 00:07:54.041 user 0m4.142s 00:07:54.041 sys 0m0.498s 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.041 21:40:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.041 21:40:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:54.041 21:40:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:07:54.041 21:40:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:54.041 21:40:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.041 21:40:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.041 ************************************ 00:07:54.041 START TEST raid_state_function_test 00:07:54.041 ************************************ 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:54.041 21:40:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.041 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=76485 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.301 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 76485' 00:07:54.301 Process raid pid: 76485 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 76485 00:07:54.302 21:40:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 76485 ']' 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.302 21:40:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.302 [2024-11-27 21:40:17.239433] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:07:54.302 [2024-11-27 21:40:17.239540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.302 [2024-11-27 21:40:17.394043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.302 [2024-11-27 21:40:17.419445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.561 [2024-11-27 21:40:17.461564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:54.561 [2024-11-27 21:40:17.461606] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.130 [2024-11-27 21:40:18.072111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.130 [2024-11-27 21:40:18.072171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.130 [2024-11-27 21:40:18.072182] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.130 [2024-11-27 21:40:18.072192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.130 [2024-11-27 21:40:18.072198] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.130 [2024-11-27 21:40:18.072211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.130 "name": "Existed_Raid", 00:07:55.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.130 "strip_size_kb": 64, 00:07:55.130 "state": "configuring", 00:07:55.130 "raid_level": "concat", 00:07:55.130 "superblock": false, 00:07:55.130 "num_base_bdevs": 3, 00:07:55.130 "num_base_bdevs_discovered": 0, 00:07:55.130 "num_base_bdevs_operational": 3, 00:07:55.130 "base_bdevs_list": [ 00:07:55.130 { 00:07:55.130 "name": "BaseBdev1", 00:07:55.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.130 "is_configured": false, 00:07:55.130 "data_offset": 0, 00:07:55.130 "data_size": 0 00:07:55.130 }, 00:07:55.130 { 00:07:55.130 "name": "BaseBdev2", 00:07:55.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.130 "is_configured": false, 00:07:55.130 "data_offset": 0, 00:07:55.130 "data_size": 0 00:07:55.130 }, 00:07:55.130 { 00:07:55.130 "name": "BaseBdev3", 00:07:55.130 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:07:55.130 "is_configured": false, 00:07:55.130 "data_offset": 0, 00:07:55.130 "data_size": 0 00:07:55.130 } 00:07:55.130 ] 00:07:55.130 }' 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.130 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 [2024-11-27 21:40:18.515247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.700 [2024-11-27 21:40:18.515343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 [2024-11-27 21:40:18.523251] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.700 [2024-11-27 21:40:18.523328] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.700 [2024-11-27 21:40:18.523355] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.700 [2024-11-27 21:40:18.523377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:55.700 [2024-11-27 21:40:18.523395] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.700 [2024-11-27 21:40:18.523416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 [2024-11-27 21:40:18.539968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.700 BaseBdev1 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.700 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.700 [ 00:07:55.700 { 00:07:55.700 "name": "BaseBdev1", 00:07:55.700 "aliases": [ 00:07:55.700 "232f586f-d628-4474-807d-9ce4677fcfc3" 00:07:55.700 ], 00:07:55.700 "product_name": "Malloc disk", 00:07:55.700 "block_size": 512, 00:07:55.700 "num_blocks": 65536, 00:07:55.700 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:55.700 "assigned_rate_limits": { 00:07:55.700 "rw_ios_per_sec": 0, 00:07:55.700 "rw_mbytes_per_sec": 0, 00:07:55.700 "r_mbytes_per_sec": 0, 00:07:55.700 "w_mbytes_per_sec": 0 00:07:55.700 }, 00:07:55.700 "claimed": true, 00:07:55.700 "claim_type": "exclusive_write", 00:07:55.700 "zoned": false, 00:07:55.700 "supported_io_types": { 00:07:55.701 "read": true, 00:07:55.701 "write": true, 00:07:55.701 "unmap": true, 00:07:55.701 "flush": true, 00:07:55.701 "reset": true, 00:07:55.701 "nvme_admin": false, 00:07:55.701 "nvme_io": false, 00:07:55.701 "nvme_io_md": false, 00:07:55.701 "write_zeroes": true, 00:07:55.701 "zcopy": true, 00:07:55.701 "get_zone_info": false, 00:07:55.701 "zone_management": false, 00:07:55.701 "zone_append": false, 00:07:55.701 "compare": false, 00:07:55.701 "compare_and_write": false, 00:07:55.701 "abort": true, 00:07:55.701 "seek_hole": false, 00:07:55.701 "seek_data": false, 00:07:55.701 "copy": true, 00:07:55.701 "nvme_iov_md": false 00:07:55.701 }, 00:07:55.701 "memory_domains": [ 00:07:55.701 { 00:07:55.701 "dma_device_id": "system", 00:07:55.701 "dma_device_type": 1 00:07:55.701 }, 00:07:55.701 { 00:07:55.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:07:55.701 "dma_device_type": 2 00:07:55.701 } 00:07:55.701 ], 00:07:55.701 "driver_specific": {} 00:07:55.701 } 00:07:55.701 ] 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.701 21:40:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.701 "name": "Existed_Raid", 00:07:55.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.701 "strip_size_kb": 64, 00:07:55.701 "state": "configuring", 00:07:55.701 "raid_level": "concat", 00:07:55.701 "superblock": false, 00:07:55.701 "num_base_bdevs": 3, 00:07:55.701 "num_base_bdevs_discovered": 1, 00:07:55.701 "num_base_bdevs_operational": 3, 00:07:55.701 "base_bdevs_list": [ 00:07:55.701 { 00:07:55.701 "name": "BaseBdev1", 00:07:55.701 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:55.701 "is_configured": true, 00:07:55.701 "data_offset": 0, 00:07:55.701 "data_size": 65536 00:07:55.701 }, 00:07:55.701 { 00:07:55.701 "name": "BaseBdev2", 00:07:55.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.701 "is_configured": false, 00:07:55.701 "data_offset": 0, 00:07:55.701 "data_size": 0 00:07:55.701 }, 00:07:55.701 { 00:07:55.701 "name": "BaseBdev3", 00:07:55.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.701 "is_configured": false, 00:07:55.701 "data_offset": 0, 00:07:55.701 "data_size": 0 00:07:55.701 } 00:07:55.701 ] 00:07:55.701 }' 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.701 21:40:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 [2024-11-27 21:40:19.015203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:55.961 [2024-11-27 21:40:19.015256] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.961 [2024-11-27 21:40:19.027214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:55.961 [2024-11-27 21:40:19.029102] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.961 [2024-11-27 21:40:19.029173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.961 [2024-11-27 21:40:19.029231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:07:55.961 [2024-11-27 21:40:19.029272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.961 21:40:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.961 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.962 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.962 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.962 "name": "Existed_Raid", 00:07:55.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.962 "strip_size_kb": 64, 00:07:55.962 "state": "configuring", 00:07:55.962 "raid_level": "concat", 00:07:55.962 "superblock": false, 00:07:55.962 "num_base_bdevs": 3, 00:07:55.962 "num_base_bdevs_discovered": 1, 00:07:55.962 "num_base_bdevs_operational": 3, 00:07:55.962 "base_bdevs_list": [ 00:07:55.962 { 00:07:55.962 "name": "BaseBdev1", 00:07:55.962 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:55.962 "is_configured": true, 00:07:55.962 "data_offset": 
0, 00:07:55.962 "data_size": 65536 00:07:55.962 }, 00:07:55.962 { 00:07:55.962 "name": "BaseBdev2", 00:07:55.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.962 "is_configured": false, 00:07:55.962 "data_offset": 0, 00:07:55.962 "data_size": 0 00:07:55.962 }, 00:07:55.962 { 00:07:55.962 "name": "BaseBdev3", 00:07:55.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.962 "is_configured": false, 00:07:55.962 "data_offset": 0, 00:07:55.962 "data_size": 0 00:07:55.962 } 00:07:55.962 ] 00:07:55.962 }' 00:07:55.962 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.962 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.575 [2024-11-27 21:40:19.505208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.575 BaseBdev2 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.575 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.575 [ 00:07:56.575 { 00:07:56.575 "name": "BaseBdev2", 00:07:56.576 "aliases": [ 00:07:56.576 "bde07d43-3b3c-4a89-bad9-92aa265c9ddd" 00:07:56.576 ], 00:07:56.576 "product_name": "Malloc disk", 00:07:56.576 "block_size": 512, 00:07:56.576 "num_blocks": 65536, 00:07:56.576 "uuid": "bde07d43-3b3c-4a89-bad9-92aa265c9ddd", 00:07:56.576 "assigned_rate_limits": { 00:07:56.576 "rw_ios_per_sec": 0, 00:07:56.576 "rw_mbytes_per_sec": 0, 00:07:56.576 "r_mbytes_per_sec": 0, 00:07:56.576 "w_mbytes_per_sec": 0 00:07:56.576 }, 00:07:56.576 "claimed": true, 00:07:56.576 "claim_type": "exclusive_write", 00:07:56.576 "zoned": false, 00:07:56.576 "supported_io_types": { 00:07:56.576 "read": true, 00:07:56.576 "write": true, 00:07:56.576 "unmap": true, 00:07:56.576 "flush": true, 00:07:56.576 "reset": true, 00:07:56.576 "nvme_admin": false, 00:07:56.576 "nvme_io": false, 00:07:56.576 "nvme_io_md": false, 00:07:56.576 "write_zeroes": true, 00:07:56.576 "zcopy": true, 00:07:56.576 "get_zone_info": false, 00:07:56.576 "zone_management": false, 00:07:56.576 "zone_append": false, 00:07:56.576 "compare": false, 00:07:56.576 "compare_and_write": false, 00:07:56.576 "abort": true, 00:07:56.576 "seek_hole": 
false, 00:07:56.576 "seek_data": false, 00:07:56.576 "copy": true, 00:07:56.576 "nvme_iov_md": false 00:07:56.576 }, 00:07:56.576 "memory_domains": [ 00:07:56.576 { 00:07:56.576 "dma_device_id": "system", 00:07:56.576 "dma_device_type": 1 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.576 "dma_device_type": 2 00:07:56.576 } 00:07:56.576 ], 00:07:56.576 "driver_specific": {} 00:07:56.576 } 00:07:56.576 ] 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.576 "name": "Existed_Raid", 00:07:56.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.576 "strip_size_kb": 64, 00:07:56.576 "state": "configuring", 00:07:56.576 "raid_level": "concat", 00:07:56.576 "superblock": false, 00:07:56.576 "num_base_bdevs": 3, 00:07:56.576 "num_base_bdevs_discovered": 2, 00:07:56.576 "num_base_bdevs_operational": 3, 00:07:56.576 "base_bdevs_list": [ 00:07:56.576 { 00:07:56.576 "name": "BaseBdev1", 00:07:56.576 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:56.576 "is_configured": true, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 65536 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "name": "BaseBdev2", 00:07:56.576 "uuid": "bde07d43-3b3c-4a89-bad9-92aa265c9ddd", 00:07:56.576 "is_configured": true, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 65536 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "name": "BaseBdev3", 00:07:56.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.576 "is_configured": false, 00:07:56.576 "data_offset": 0, 00:07:56.576 "data_size": 0 00:07:56.576 } 00:07:56.576 ] 00:07:56.576 }' 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.576 21:40:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.144 [2024-11-27 21:40:20.035905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:57.144 [2024-11-27 21:40:20.036012] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:07:57.144 [2024-11-27 21:40:20.036031] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:57.144 [2024-11-27 21:40:20.036380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:07:57.144 [2024-11-27 21:40:20.036544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:07:57.144 [2024-11-27 21:40:20.036555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:07:57.144 [2024-11-27 21:40:20.036772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.144 BaseBdev3 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:57.144 21:40:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.144 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 [ 00:07:57.145 { 00:07:57.145 "name": "BaseBdev3", 00:07:57.145 "aliases": [ 00:07:57.145 "bebebecd-de62-420e-92b6-5116d8af38a7" 00:07:57.145 ], 00:07:57.145 "product_name": "Malloc disk", 00:07:57.145 "block_size": 512, 00:07:57.145 "num_blocks": 65536, 00:07:57.145 "uuid": "bebebecd-de62-420e-92b6-5116d8af38a7", 00:07:57.145 "assigned_rate_limits": { 00:07:57.145 "rw_ios_per_sec": 0, 00:07:57.145 "rw_mbytes_per_sec": 0, 00:07:57.145 "r_mbytes_per_sec": 0, 00:07:57.145 "w_mbytes_per_sec": 0 00:07:57.145 }, 00:07:57.145 "claimed": true, 00:07:57.145 "claim_type": "exclusive_write", 00:07:57.145 "zoned": false, 00:07:57.145 "supported_io_types": { 00:07:57.145 "read": true, 00:07:57.145 "write": true, 00:07:57.145 "unmap": true, 00:07:57.145 "flush": true, 00:07:57.145 "reset": true, 00:07:57.145 "nvme_admin": false, 00:07:57.145 "nvme_io": false, 00:07:57.145 "nvme_io_md": false, 00:07:57.145 "write_zeroes": true, 00:07:57.145 "zcopy": true, 00:07:57.145 "get_zone_info": false, 00:07:57.145 "zone_management": false, 00:07:57.145 "zone_append": false, 00:07:57.145 "compare": false, 
00:07:57.145 "compare_and_write": false, 00:07:57.145 "abort": true, 00:07:57.145 "seek_hole": false, 00:07:57.145 "seek_data": false, 00:07:57.145 "copy": true, 00:07:57.145 "nvme_iov_md": false 00:07:57.145 }, 00:07:57.145 "memory_domains": [ 00:07:57.145 { 00:07:57.145 "dma_device_id": "system", 00:07:57.145 "dma_device_type": 1 00:07:57.145 }, 00:07:57.145 { 00:07:57.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.145 "dma_device_type": 2 00:07:57.145 } 00:07:57.145 ], 00:07:57.145 "driver_specific": {} 00:07:57.145 } 00:07:57.145 ] 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.145 "name": "Existed_Raid", 00:07:57.145 "uuid": "a65134bf-25ae-49a8-9017-4024266cd9de", 00:07:57.145 "strip_size_kb": 64, 00:07:57.145 "state": "online", 00:07:57.145 "raid_level": "concat", 00:07:57.145 "superblock": false, 00:07:57.145 "num_base_bdevs": 3, 00:07:57.145 "num_base_bdevs_discovered": 3, 00:07:57.145 "num_base_bdevs_operational": 3, 00:07:57.145 "base_bdevs_list": [ 00:07:57.145 { 00:07:57.145 "name": "BaseBdev1", 00:07:57.145 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:57.145 "is_configured": true, 00:07:57.145 "data_offset": 0, 00:07:57.145 "data_size": 65536 00:07:57.145 }, 00:07:57.145 { 00:07:57.145 "name": "BaseBdev2", 00:07:57.145 "uuid": "bde07d43-3b3c-4a89-bad9-92aa265c9ddd", 00:07:57.145 "is_configured": true, 00:07:57.145 "data_offset": 0, 00:07:57.145 "data_size": 65536 00:07:57.145 }, 00:07:57.145 { 00:07:57.145 "name": "BaseBdev3", 00:07:57.145 "uuid": "bebebecd-de62-420e-92b6-5116d8af38a7", 00:07:57.145 "is_configured": true, 00:07:57.145 "data_offset": 0, 00:07:57.145 "data_size": 65536 00:07:57.145 } 00:07:57.145 ] 00:07:57.145 }' 00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:57.145 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.404 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.664 [2024-11-27 21:40:20.531420] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.664 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.664 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.664 "name": "Existed_Raid", 00:07:57.664 "aliases": [ 00:07:57.664 "a65134bf-25ae-49a8-9017-4024266cd9de" 00:07:57.664 ], 00:07:57.664 "product_name": "Raid Volume", 00:07:57.665 "block_size": 512, 00:07:57.665 "num_blocks": 196608, 00:07:57.665 "uuid": "a65134bf-25ae-49a8-9017-4024266cd9de", 00:07:57.665 "assigned_rate_limits": { 00:07:57.665 "rw_ios_per_sec": 0, 00:07:57.665 "rw_mbytes_per_sec": 0, 00:07:57.665 "r_mbytes_per_sec": 
0, 00:07:57.665 "w_mbytes_per_sec": 0 00:07:57.665 }, 00:07:57.665 "claimed": false, 00:07:57.665 "zoned": false, 00:07:57.665 "supported_io_types": { 00:07:57.665 "read": true, 00:07:57.665 "write": true, 00:07:57.665 "unmap": true, 00:07:57.665 "flush": true, 00:07:57.665 "reset": true, 00:07:57.665 "nvme_admin": false, 00:07:57.665 "nvme_io": false, 00:07:57.665 "nvme_io_md": false, 00:07:57.665 "write_zeroes": true, 00:07:57.665 "zcopy": false, 00:07:57.665 "get_zone_info": false, 00:07:57.665 "zone_management": false, 00:07:57.665 "zone_append": false, 00:07:57.665 "compare": false, 00:07:57.665 "compare_and_write": false, 00:07:57.665 "abort": false, 00:07:57.665 "seek_hole": false, 00:07:57.665 "seek_data": false, 00:07:57.665 "copy": false, 00:07:57.665 "nvme_iov_md": false 00:07:57.665 }, 00:07:57.665 "memory_domains": [ 00:07:57.665 { 00:07:57.665 "dma_device_id": "system", 00:07:57.665 "dma_device_type": 1 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.665 "dma_device_type": 2 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "dma_device_id": "system", 00:07:57.665 "dma_device_type": 1 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.665 "dma_device_type": 2 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "dma_device_id": "system", 00:07:57.665 "dma_device_type": 1 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.665 "dma_device_type": 2 00:07:57.665 } 00:07:57.665 ], 00:07:57.665 "driver_specific": { 00:07:57.665 "raid": { 00:07:57.665 "uuid": "a65134bf-25ae-49a8-9017-4024266cd9de", 00:07:57.665 "strip_size_kb": 64, 00:07:57.665 "state": "online", 00:07:57.665 "raid_level": "concat", 00:07:57.665 "superblock": false, 00:07:57.665 "num_base_bdevs": 3, 00:07:57.665 "num_base_bdevs_discovered": 3, 00:07:57.665 "num_base_bdevs_operational": 3, 00:07:57.665 "base_bdevs_list": [ 00:07:57.665 { 00:07:57.665 "name": "BaseBdev1", 
00:07:57.665 "uuid": "232f586f-d628-4474-807d-9ce4677fcfc3", 00:07:57.665 "is_configured": true, 00:07:57.665 "data_offset": 0, 00:07:57.665 "data_size": 65536 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "name": "BaseBdev2", 00:07:57.665 "uuid": "bde07d43-3b3c-4a89-bad9-92aa265c9ddd", 00:07:57.665 "is_configured": true, 00:07:57.665 "data_offset": 0, 00:07:57.665 "data_size": 65536 00:07:57.665 }, 00:07:57.665 { 00:07:57.665 "name": "BaseBdev3", 00:07:57.665 "uuid": "bebebecd-de62-420e-92b6-5116d8af38a7", 00:07:57.665 "is_configured": true, 00:07:57.665 "data_offset": 0, 00:07:57.665 "data_size": 65536 00:07:57.665 } 00:07:57.665 ] 00:07:57.665 } 00:07:57.665 } 00:07:57.665 }' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:57.665 BaseBdev2 00:07:57.665 BaseBdev3' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.665 [2024-11-27 21:40:20.762812] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.665 [2024-11-27 21:40:20.762880] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.665 [2024-11-27 21:40:20.762972] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.665 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.666 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.926 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.926 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.926 "name": "Existed_Raid", 00:07:57.926 "uuid": "a65134bf-25ae-49a8-9017-4024266cd9de", 00:07:57.926 "strip_size_kb": 64, 00:07:57.926 "state": "offline", 00:07:57.926 "raid_level": "concat", 00:07:57.926 "superblock": false, 00:07:57.926 "num_base_bdevs": 3, 00:07:57.926 "num_base_bdevs_discovered": 2, 00:07:57.926 "num_base_bdevs_operational": 2, 00:07:57.926 "base_bdevs_list": [ 00:07:57.926 { 00:07:57.926 "name": null, 00:07:57.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.926 "is_configured": false, 00:07:57.926 "data_offset": 0, 00:07:57.926 "data_size": 65536 00:07:57.926 }, 00:07:57.926 { 00:07:57.926 "name": "BaseBdev2", 00:07:57.926 "uuid": 
"bde07d43-3b3c-4a89-bad9-92aa265c9ddd", 00:07:57.926 "is_configured": true, 00:07:57.926 "data_offset": 0, 00:07:57.926 "data_size": 65536 00:07:57.926 }, 00:07:57.926 { 00:07:57.926 "name": "BaseBdev3", 00:07:57.926 "uuid": "bebebecd-de62-420e-92b6-5116d8af38a7", 00:07:57.926 "is_configured": true, 00:07:57.926 "data_offset": 0, 00:07:57.926 "data_size": 65536 00:07:57.926 } 00:07:57.926 ] 00:07:57.926 }' 00:07:57.926 21:40:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.926 21:40:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.186 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.187 [2024-11-27 21:40:21.237298] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.187 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.187 [2024-11-27 21:40:21.304334] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:07:58.187 [2024-11-27 21:40:21.304428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.448 21:40:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 BaseBdev2 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.448 
21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 [ 00:07:58.448 { 00:07:58.448 "name": "BaseBdev2", 00:07:58.448 "aliases": [ 00:07:58.448 "9945b452-0e66-4c57-9d0c-ea613434f97c" 00:07:58.448 ], 00:07:58.448 "product_name": "Malloc disk", 00:07:58.448 "block_size": 512, 00:07:58.448 "num_blocks": 65536, 00:07:58.448 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:07:58.448 "assigned_rate_limits": { 00:07:58.448 "rw_ios_per_sec": 0, 00:07:58.448 "rw_mbytes_per_sec": 0, 00:07:58.448 "r_mbytes_per_sec": 0, 00:07:58.448 "w_mbytes_per_sec": 0 00:07:58.448 }, 00:07:58.448 "claimed": false, 00:07:58.448 "zoned": false, 00:07:58.448 "supported_io_types": { 00:07:58.448 "read": true, 00:07:58.448 "write": true, 00:07:58.448 "unmap": true, 00:07:58.448 "flush": true, 00:07:58.448 "reset": true, 00:07:58.448 "nvme_admin": false, 00:07:58.448 "nvme_io": false, 00:07:58.448 "nvme_io_md": false, 00:07:58.448 "write_zeroes": true, 
00:07:58.448 "zcopy": true, 00:07:58.448 "get_zone_info": false, 00:07:58.448 "zone_management": false, 00:07:58.448 "zone_append": false, 00:07:58.448 "compare": false, 00:07:58.448 "compare_and_write": false, 00:07:58.448 "abort": true, 00:07:58.448 "seek_hole": false, 00:07:58.448 "seek_data": false, 00:07:58.448 "copy": true, 00:07:58.448 "nvme_iov_md": false 00:07:58.448 }, 00:07:58.448 "memory_domains": [ 00:07:58.448 { 00:07:58.448 "dma_device_id": "system", 00:07:58.448 "dma_device_type": 1 00:07:58.448 }, 00:07:58.448 { 00:07:58.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.448 "dma_device_type": 2 00:07:58.448 } 00:07:58.448 ], 00:07:58.448 "driver_specific": {} 00:07:58.448 } 00:07:58.448 ] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 BaseBdev3 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:58.448 21:40:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:07:58.448 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.449 [ 00:07:58.449 { 00:07:58.449 "name": "BaseBdev3", 00:07:58.449 "aliases": [ 00:07:58.449 "d23ab06f-8b10-4258-8434-616dd29733cc" 00:07:58.449 ], 00:07:58.449 "product_name": "Malloc disk", 00:07:58.449 "block_size": 512, 00:07:58.449 "num_blocks": 65536, 00:07:58.449 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:07:58.449 "assigned_rate_limits": { 00:07:58.449 "rw_ios_per_sec": 0, 00:07:58.449 "rw_mbytes_per_sec": 0, 00:07:58.449 "r_mbytes_per_sec": 0, 00:07:58.449 "w_mbytes_per_sec": 0 00:07:58.449 }, 00:07:58.449 "claimed": false, 00:07:58.449 "zoned": false, 00:07:58.449 "supported_io_types": { 00:07:58.449 "read": true, 00:07:58.449 "write": true, 00:07:58.449 "unmap": true, 00:07:58.449 "flush": true, 00:07:58.449 "reset": true, 00:07:58.449 "nvme_admin": false, 00:07:58.449 "nvme_io": false, 00:07:58.449 "nvme_io_md": false, 00:07:58.449 "write_zeroes": true, 
00:07:58.449 "zcopy": true, 00:07:58.449 "get_zone_info": false, 00:07:58.449 "zone_management": false, 00:07:58.449 "zone_append": false, 00:07:58.449 "compare": false, 00:07:58.449 "compare_and_write": false, 00:07:58.449 "abort": true, 00:07:58.449 "seek_hole": false, 00:07:58.449 "seek_data": false, 00:07:58.449 "copy": true, 00:07:58.449 "nvme_iov_md": false 00:07:58.449 }, 00:07:58.449 "memory_domains": [ 00:07:58.449 { 00:07:58.449 "dma_device_id": "system", 00:07:58.449 "dma_device_type": 1 00:07:58.449 }, 00:07:58.449 { 00:07:58.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.449 "dma_device_type": 2 00:07:58.449 } 00:07:58.449 ], 00:07:58.449 "driver_specific": {} 00:07:58.449 } 00:07:58.449 ] 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.449 [2024-11-27 21:40:21.460014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:58.449 [2024-11-27 21:40:21.460093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:58.449 [2024-11-27 21:40:21.460146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:58.449 [2024-11-27 21:40:21.461950] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.449 "name": "Existed_Raid", 00:07:58.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.449 "strip_size_kb": 64, 00:07:58.449 "state": "configuring", 00:07:58.449 "raid_level": "concat", 00:07:58.449 "superblock": false, 00:07:58.449 "num_base_bdevs": 3, 00:07:58.449 "num_base_bdevs_discovered": 2, 00:07:58.449 "num_base_bdevs_operational": 3, 00:07:58.449 "base_bdevs_list": [ 00:07:58.449 { 00:07:58.449 "name": "BaseBdev1", 00:07:58.449 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.449 "is_configured": false, 00:07:58.449 "data_offset": 0, 00:07:58.449 "data_size": 0 00:07:58.449 }, 00:07:58.449 { 00:07:58.449 "name": "BaseBdev2", 00:07:58.449 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:07:58.449 "is_configured": true, 00:07:58.449 "data_offset": 0, 00:07:58.449 "data_size": 65536 00:07:58.449 }, 00:07:58.449 { 00:07:58.449 "name": "BaseBdev3", 00:07:58.449 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:07:58.449 "is_configured": true, 00:07:58.449 "data_offset": 0, 00:07:58.449 "data_size": 65536 00:07:58.449 } 00:07:58.449 ] 00:07:58.449 }' 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.449 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 [2024-11-27 21:40:21.903265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.019 "name": "Existed_Raid", 00:07:59.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.019 "strip_size_kb": 64, 00:07:59.019 "state": "configuring", 00:07:59.019 "raid_level": "concat", 00:07:59.019 "superblock": false, 
00:07:59.019 "num_base_bdevs": 3, 00:07:59.019 "num_base_bdevs_discovered": 1, 00:07:59.019 "num_base_bdevs_operational": 3, 00:07:59.019 "base_bdevs_list": [ 00:07:59.019 { 00:07:59.019 "name": "BaseBdev1", 00:07:59.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.019 "is_configured": false, 00:07:59.019 "data_offset": 0, 00:07:59.019 "data_size": 0 00:07:59.019 }, 00:07:59.019 { 00:07:59.019 "name": null, 00:07:59.019 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:07:59.019 "is_configured": false, 00:07:59.019 "data_offset": 0, 00:07:59.019 "data_size": 65536 00:07:59.019 }, 00:07:59.019 { 00:07:59.019 "name": "BaseBdev3", 00:07:59.019 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:07:59.019 "is_configured": true, 00:07:59.019 "data_offset": 0, 00:07:59.019 "data_size": 65536 00:07:59.019 } 00:07:59.019 ] 00:07:59.019 }' 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.019 21:40:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.280 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.280 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:07:59.280 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.280 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.280 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.540 
21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.540 [2024-11-27 21:40:22.421242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.540 BaseBdev1 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.540 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.541 [ 00:07:59.541 { 00:07:59.541 "name": "BaseBdev1", 00:07:59.541 "aliases": [ 00:07:59.541 "7c58a9c3-a729-4590-94f0-78fe682b3913" 00:07:59.541 ], 00:07:59.541 "product_name": 
"Malloc disk", 00:07:59.541 "block_size": 512, 00:07:59.541 "num_blocks": 65536, 00:07:59.541 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:07:59.541 "assigned_rate_limits": { 00:07:59.541 "rw_ios_per_sec": 0, 00:07:59.541 "rw_mbytes_per_sec": 0, 00:07:59.541 "r_mbytes_per_sec": 0, 00:07:59.541 "w_mbytes_per_sec": 0 00:07:59.541 }, 00:07:59.541 "claimed": true, 00:07:59.541 "claim_type": "exclusive_write", 00:07:59.541 "zoned": false, 00:07:59.541 "supported_io_types": { 00:07:59.541 "read": true, 00:07:59.541 "write": true, 00:07:59.541 "unmap": true, 00:07:59.541 "flush": true, 00:07:59.541 "reset": true, 00:07:59.541 "nvme_admin": false, 00:07:59.541 "nvme_io": false, 00:07:59.541 "nvme_io_md": false, 00:07:59.541 "write_zeroes": true, 00:07:59.541 "zcopy": true, 00:07:59.541 "get_zone_info": false, 00:07:59.541 "zone_management": false, 00:07:59.541 "zone_append": false, 00:07:59.541 "compare": false, 00:07:59.541 "compare_and_write": false, 00:07:59.541 "abort": true, 00:07:59.541 "seek_hole": false, 00:07:59.541 "seek_data": false, 00:07:59.541 "copy": true, 00:07:59.541 "nvme_iov_md": false 00:07:59.541 }, 00:07:59.541 "memory_domains": [ 00:07:59.541 { 00:07:59.541 "dma_device_id": "system", 00:07:59.541 "dma_device_type": 1 00:07:59.541 }, 00:07:59.541 { 00:07:59.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.541 "dma_device_type": 2 00:07:59.541 } 00:07:59.541 ], 00:07:59.541 "driver_specific": {} 00:07:59.541 } 00:07:59.541 ] 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.541 21:40:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.541 "name": "Existed_Raid", 00:07:59.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.541 "strip_size_kb": 64, 00:07:59.541 "state": "configuring", 00:07:59.541 "raid_level": "concat", 00:07:59.541 "superblock": false, 00:07:59.541 "num_base_bdevs": 3, 00:07:59.541 "num_base_bdevs_discovered": 2, 00:07:59.541 "num_base_bdevs_operational": 3, 00:07:59.541 "base_bdevs_list": [ 00:07:59.541 { 00:07:59.541 "name": "BaseBdev1", 
00:07:59.541 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:07:59.541 "is_configured": true, 00:07:59.541 "data_offset": 0, 00:07:59.541 "data_size": 65536 00:07:59.541 }, 00:07:59.541 { 00:07:59.541 "name": null, 00:07:59.541 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:07:59.541 "is_configured": false, 00:07:59.541 "data_offset": 0, 00:07:59.541 "data_size": 65536 00:07:59.541 }, 00:07:59.541 { 00:07:59.541 "name": "BaseBdev3", 00:07:59.541 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:07:59.541 "is_configured": true, 00:07:59.541 "data_offset": 0, 00:07:59.541 "data_size": 65536 00:07:59.541 } 00:07:59.541 ] 00:07:59.541 }' 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.541 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.802 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:07:59.802 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.802 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.802 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.062 [2024-11-27 21:40:22.964374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:00.062 
21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.062 21:40:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.062 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.062 "name": "Existed_Raid", 00:08:00.062 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:00.062 "strip_size_kb": 64, 00:08:00.062 "state": "configuring", 00:08:00.062 "raid_level": "concat", 00:08:00.062 "superblock": false, 00:08:00.062 "num_base_bdevs": 3, 00:08:00.062 "num_base_bdevs_discovered": 1, 00:08:00.062 "num_base_bdevs_operational": 3, 00:08:00.062 "base_bdevs_list": [ 00:08:00.062 { 00:08:00.062 "name": "BaseBdev1", 00:08:00.062 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:00.062 "is_configured": true, 00:08:00.062 "data_offset": 0, 00:08:00.062 "data_size": 65536 00:08:00.062 }, 00:08:00.062 { 00:08:00.062 "name": null, 00:08:00.062 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:00.062 "is_configured": false, 00:08:00.062 "data_offset": 0, 00:08:00.062 "data_size": 65536 00:08:00.062 }, 00:08:00.062 { 00:08:00.062 "name": null, 00:08:00.062 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:00.062 "is_configured": false, 00:08:00.062 "data_offset": 0, 00:08:00.062 "data_size": 65536 00:08:00.062 } 00:08:00.062 ] 00:08:00.062 }' 00:08:00.062 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.062 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.322 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.322 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.322 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.322 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.322 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.583 [2024-11-27 21:40:23.467590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.583 "name": "Existed_Raid", 00:08:00.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.583 "strip_size_kb": 64, 00:08:00.583 "state": "configuring", 00:08:00.583 "raid_level": "concat", 00:08:00.583 "superblock": false, 00:08:00.583 "num_base_bdevs": 3, 00:08:00.583 "num_base_bdevs_discovered": 2, 00:08:00.583 "num_base_bdevs_operational": 3, 00:08:00.583 "base_bdevs_list": [ 00:08:00.583 { 00:08:00.583 "name": "BaseBdev1", 00:08:00.583 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:00.583 "is_configured": true, 00:08:00.583 "data_offset": 0, 00:08:00.583 "data_size": 65536 00:08:00.583 }, 00:08:00.583 { 00:08:00.583 "name": null, 00:08:00.583 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:00.583 "is_configured": false, 00:08:00.583 "data_offset": 0, 00:08:00.583 "data_size": 65536 00:08:00.583 }, 00:08:00.583 { 00:08:00.583 "name": "BaseBdev3", 00:08:00.583 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:00.583 "is_configured": true, 00:08:00.583 "data_offset": 0, 00:08:00.583 "data_size": 65536 00:08:00.583 } 00:08:00.583 ] 00:08:00.583 }' 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.583 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.842 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.842 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.842 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:00.843 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:00.843 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.103 [2024-11-27 21:40:23.982725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.103 21:40:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.103 21:40:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.103 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.103 "name": "Existed_Raid", 00:08:01.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.103 "strip_size_kb": 64, 00:08:01.103 "state": "configuring", 00:08:01.103 "raid_level": "concat", 00:08:01.103 "superblock": false, 00:08:01.103 "num_base_bdevs": 3, 00:08:01.103 "num_base_bdevs_discovered": 1, 00:08:01.103 "num_base_bdevs_operational": 3, 00:08:01.103 "base_bdevs_list": [ 00:08:01.103 { 00:08:01.103 "name": null, 00:08:01.103 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:01.103 "is_configured": false, 00:08:01.103 "data_offset": 0, 00:08:01.103 "data_size": 65536 00:08:01.103 }, 00:08:01.103 { 00:08:01.103 "name": null, 00:08:01.103 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:01.103 "is_configured": false, 00:08:01.103 "data_offset": 0, 00:08:01.103 "data_size": 65536 00:08:01.103 }, 00:08:01.103 { 00:08:01.103 "name": "BaseBdev3", 00:08:01.103 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:01.103 "is_configured": true, 00:08:01.103 "data_offset": 0, 00:08:01.103 "data_size": 65536 00:08:01.103 } 00:08:01.103 ] 00:08:01.103 }' 00:08:01.103 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.103 21:40:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.363 [2024-11-27 21:40:24.468258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.363 21:40:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.363 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.623 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.624 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.624 "name": "Existed_Raid", 00:08:01.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.624 "strip_size_kb": 64, 00:08:01.624 "state": "configuring", 00:08:01.624 "raid_level": "concat", 00:08:01.624 "superblock": false, 00:08:01.624 "num_base_bdevs": 3, 00:08:01.624 "num_base_bdevs_discovered": 2, 00:08:01.624 "num_base_bdevs_operational": 3, 00:08:01.624 "base_bdevs_list": [ 00:08:01.624 { 00:08:01.624 "name": null, 00:08:01.624 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:01.624 "is_configured": false, 00:08:01.624 "data_offset": 0, 00:08:01.624 "data_size": 65536 00:08:01.624 }, 00:08:01.624 { 00:08:01.624 "name": "BaseBdev2", 00:08:01.624 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:01.624 "is_configured": true, 00:08:01.624 "data_offset": 
0, 00:08:01.624 "data_size": 65536 00:08:01.624 }, 00:08:01.624 { 00:08:01.624 "name": "BaseBdev3", 00:08:01.624 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:01.624 "is_configured": true, 00:08:01.624 "data_offset": 0, 00:08:01.624 "data_size": 65536 00:08:01.624 } 00:08:01.624 ] 00:08:01.624 }' 00:08:01.624 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.624 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7c58a9c3-a729-4590-94f0-78fe682b3913 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.883 NewBaseBdev 00:08:01.883 [2024-11-27 21:40:24.974424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:01.883 [2024-11-27 21:40:24.974461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:01.883 [2024-11-27 21:40:24.974470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:01.883 [2024-11-27 21:40:24.974709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:01.883 [2024-11-27 21:40:24.974841] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:01.883 [2024-11-27 21:40:24.974851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:01.883 [2024-11-27 21:40:24.975023] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:01.883 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.884 
21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.884 21:40:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.143 [ 00:08:02.143 { 00:08:02.143 "name": "NewBaseBdev", 00:08:02.143 "aliases": [ 00:08:02.143 "7c58a9c3-a729-4590-94f0-78fe682b3913" 00:08:02.143 ], 00:08:02.143 "product_name": "Malloc disk", 00:08:02.143 "block_size": 512, 00:08:02.143 "num_blocks": 65536, 00:08:02.143 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:02.143 "assigned_rate_limits": { 00:08:02.143 "rw_ios_per_sec": 0, 00:08:02.143 "rw_mbytes_per_sec": 0, 00:08:02.143 "r_mbytes_per_sec": 0, 00:08:02.143 "w_mbytes_per_sec": 0 00:08:02.143 }, 00:08:02.143 "claimed": true, 00:08:02.143 "claim_type": "exclusive_write", 00:08:02.143 "zoned": false, 00:08:02.143 "supported_io_types": { 00:08:02.143 "read": true, 00:08:02.143 "write": true, 00:08:02.143 "unmap": true, 00:08:02.143 "flush": true, 00:08:02.143 "reset": true, 00:08:02.143 "nvme_admin": false, 00:08:02.143 "nvme_io": false, 00:08:02.143 "nvme_io_md": false, 00:08:02.143 "write_zeroes": true, 00:08:02.143 "zcopy": true, 00:08:02.143 "get_zone_info": false, 00:08:02.143 "zone_management": false, 00:08:02.143 "zone_append": false, 00:08:02.143 "compare": false, 00:08:02.143 "compare_and_write": false, 00:08:02.143 "abort": true, 00:08:02.143 "seek_hole": false, 00:08:02.143 "seek_data": false, 00:08:02.143 "copy": true, 00:08:02.143 "nvme_iov_md": false 00:08:02.143 }, 00:08:02.143 
"memory_domains": [ 00:08:02.143 { 00:08:02.143 "dma_device_id": "system", 00:08:02.143 "dma_device_type": 1 00:08:02.143 }, 00:08:02.143 { 00:08:02.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.143 "dma_device_type": 2 00:08:02.143 } 00:08:02.143 ], 00:08:02.143 "driver_specific": {} 00:08:02.143 } 00:08:02.143 ] 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.143 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.144 "name": "Existed_Raid", 00:08:02.144 "uuid": "6cdbe0d1-4024-4fc9-97e5-48e4f4b2ea8e", 00:08:02.144 "strip_size_kb": 64, 00:08:02.144 "state": "online", 00:08:02.144 "raid_level": "concat", 00:08:02.144 "superblock": false, 00:08:02.144 "num_base_bdevs": 3, 00:08:02.144 "num_base_bdevs_discovered": 3, 00:08:02.144 "num_base_bdevs_operational": 3, 00:08:02.144 "base_bdevs_list": [ 00:08:02.144 { 00:08:02.144 "name": "NewBaseBdev", 00:08:02.144 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:02.144 "is_configured": true, 00:08:02.144 "data_offset": 0, 00:08:02.144 "data_size": 65536 00:08:02.144 }, 00:08:02.144 { 00:08:02.144 "name": "BaseBdev2", 00:08:02.144 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:02.144 "is_configured": true, 00:08:02.144 "data_offset": 0, 00:08:02.144 "data_size": 65536 00:08:02.144 }, 00:08:02.144 { 00:08:02.144 "name": "BaseBdev3", 00:08:02.144 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:02.144 "is_configured": true, 00:08:02.144 "data_offset": 0, 00:08:02.144 "data_size": 65536 00:08:02.144 } 00:08:02.144 ] 00:08:02.144 }' 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.144 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.404 [2024-11-27 21:40:25.493892] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.404 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.663 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.663 "name": "Existed_Raid", 00:08:02.663 "aliases": [ 00:08:02.663 "6cdbe0d1-4024-4fc9-97e5-48e4f4b2ea8e" 00:08:02.663 ], 00:08:02.663 "product_name": "Raid Volume", 00:08:02.663 "block_size": 512, 00:08:02.663 "num_blocks": 196608, 00:08:02.663 "uuid": "6cdbe0d1-4024-4fc9-97e5-48e4f4b2ea8e", 00:08:02.663 "assigned_rate_limits": { 00:08:02.663 "rw_ios_per_sec": 0, 00:08:02.663 "rw_mbytes_per_sec": 0, 00:08:02.663 "r_mbytes_per_sec": 0, 00:08:02.663 "w_mbytes_per_sec": 0 00:08:02.663 }, 00:08:02.663 "claimed": false, 00:08:02.663 "zoned": false, 00:08:02.663 "supported_io_types": { 00:08:02.663 "read": true, 00:08:02.663 "write": true, 00:08:02.663 "unmap": true, 00:08:02.663 "flush": true, 00:08:02.663 "reset": true, 00:08:02.663 "nvme_admin": false, 00:08:02.663 "nvme_io": false, 00:08:02.663 "nvme_io_md": false, 00:08:02.663 "write_zeroes": true, 
00:08:02.663 "zcopy": false, 00:08:02.663 "get_zone_info": false, 00:08:02.663 "zone_management": false, 00:08:02.663 "zone_append": false, 00:08:02.663 "compare": false, 00:08:02.663 "compare_and_write": false, 00:08:02.663 "abort": false, 00:08:02.663 "seek_hole": false, 00:08:02.663 "seek_data": false, 00:08:02.663 "copy": false, 00:08:02.663 "nvme_iov_md": false 00:08:02.663 }, 00:08:02.663 "memory_domains": [ 00:08:02.663 { 00:08:02.663 "dma_device_id": "system", 00:08:02.663 "dma_device_type": 1 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.663 "dma_device_type": 2 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "dma_device_id": "system", 00:08:02.663 "dma_device_type": 1 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.663 "dma_device_type": 2 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "dma_device_id": "system", 00:08:02.663 "dma_device_type": 1 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.663 "dma_device_type": 2 00:08:02.663 } 00:08:02.663 ], 00:08:02.663 "driver_specific": { 00:08:02.663 "raid": { 00:08:02.663 "uuid": "6cdbe0d1-4024-4fc9-97e5-48e4f4b2ea8e", 00:08:02.663 "strip_size_kb": 64, 00:08:02.663 "state": "online", 00:08:02.663 "raid_level": "concat", 00:08:02.663 "superblock": false, 00:08:02.663 "num_base_bdevs": 3, 00:08:02.663 "num_base_bdevs_discovered": 3, 00:08:02.663 "num_base_bdevs_operational": 3, 00:08:02.663 "base_bdevs_list": [ 00:08:02.663 { 00:08:02.663 "name": "NewBaseBdev", 00:08:02.663 "uuid": "7c58a9c3-a729-4590-94f0-78fe682b3913", 00:08:02.663 "is_configured": true, 00:08:02.663 "data_offset": 0, 00:08:02.663 "data_size": 65536 00:08:02.663 }, 00:08:02.663 { 00:08:02.663 "name": "BaseBdev2", 00:08:02.663 "uuid": "9945b452-0e66-4c57-9d0c-ea613434f97c", 00:08:02.663 "is_configured": true, 00:08:02.663 "data_offset": 0, 00:08:02.663 "data_size": 65536 00:08:02.663 }, 00:08:02.663 { 
00:08:02.663 "name": "BaseBdev3", 00:08:02.663 "uuid": "d23ab06f-8b10-4258-8434-616dd29733cc", 00:08:02.663 "is_configured": true, 00:08:02.663 "data_offset": 0, 00:08:02.663 "data_size": 65536 00:08:02.663 } 00:08:02.663 ] 00:08:02.663 } 00:08:02.663 } 00:08:02.663 }' 00:08:02.663 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.663 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:02.663 BaseBdev2 00:08:02.663 BaseBdev3' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:02.664 [2024-11-27 21:40:25.741193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:02.664 [2024-11-27 21:40:25.741258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.664 [2024-11-27 21:40:25.741345] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.664 [2024-11-27 21:40:25.741402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.664 [2024-11-27 21:40:25.741423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 76485 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 76485 ']' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 76485 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76485 00:08:02.664 killing process with pid 76485 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76485' 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 76485 00:08:02.664 [2024-11-27 21:40:25.776374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.664 21:40:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 76485 00:08:02.925 [2024-11-27 21:40:25.805863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:02.925 21:40:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:02.925 00:08:02.925 real 0m8.870s 00:08:02.925 user 0m15.237s 00:08:02.925 sys 0m1.737s 00:08:02.925 ************************************ 00:08:02.925 END TEST raid_state_function_test 00:08:02.925 ************************************ 00:08:02.925 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.925 21:40:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.184 21:40:26 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:03.184 21:40:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.184 21:40:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.184 21:40:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.184 ************************************ 00:08:03.184 START TEST raid_state_function_test_sb 00:08:03.184 ************************************ 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=77084 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.184 Process raid pid: 77084 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 77084' 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 77084 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77084 ']' 00:08:03.184 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.185 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.185 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:03.185 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.185 21:40:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.185 [2024-11-27 21:40:26.184948] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:03.185 [2024-11-27 21:40:26.185163] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.443 [2024-11-27 21:40:26.339751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.444 [2024-11-27 21:40:26.365030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.444 [2024-11-27 21:40:26.406353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.444 [2024-11-27 21:40:26.406464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.012 [2024-11-27 21:40:27.008432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.012 [2024-11-27 21:40:27.008528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.012 [2024-11-27 
21:40:27.008570] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.012 [2024-11-27 21:40:27.008611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.012 [2024-11-27 21:40:27.008654] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.012 [2024-11-27 21:40:27.008681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.012 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.013 "name": "Existed_Raid", 00:08:04.013 "uuid": "920179b5-b28e-4965-acc4-718bc098394f", 00:08:04.013 "strip_size_kb": 64, 00:08:04.013 "state": "configuring", 00:08:04.013 "raid_level": "concat", 00:08:04.013 "superblock": true, 00:08:04.013 "num_base_bdevs": 3, 00:08:04.013 "num_base_bdevs_discovered": 0, 00:08:04.013 "num_base_bdevs_operational": 3, 00:08:04.013 "base_bdevs_list": [ 00:08:04.013 { 00:08:04.013 "name": "BaseBdev1", 00:08:04.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.013 "is_configured": false, 00:08:04.013 "data_offset": 0, 00:08:04.013 "data_size": 0 00:08:04.013 }, 00:08:04.013 { 00:08:04.013 "name": "BaseBdev2", 00:08:04.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.013 "is_configured": false, 00:08:04.013 "data_offset": 0, 00:08:04.013 "data_size": 0 00:08:04.013 }, 00:08:04.013 { 00:08:04.013 "name": "BaseBdev3", 00:08:04.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.013 "is_configured": false, 00:08:04.013 "data_offset": 0, 00:08:04.013 "data_size": 0 00:08:04.013 } 00:08:04.013 ] 00:08:04.013 }' 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.013 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.582 [2024-11-27 21:40:27.403647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.582 [2024-11-27 21:40:27.403732] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.582 [2024-11-27 21:40:27.411668] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.582 [2024-11-27 21:40:27.411711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.582 [2024-11-27 21:40:27.411719] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.582 [2024-11-27 21:40:27.411728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.582 [2024-11-27 21:40:27.411734] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:04.582 [2024-11-27 21:40:27.411742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:04.582 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.583 
21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 [2024-11-27 21:40:27.428355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:04.583 BaseBdev1
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 [
00:08:04.583 {
00:08:04.583 "name": "BaseBdev1",
00:08:04.583 "aliases": [
00:08:04.583 "ae38b66a-709d-460c-8ed0-c037ba7f674c"
00:08:04.583 ],
00:08:04.583 "product_name": "Malloc disk",
00:08:04.583 "block_size": 512,
00:08:04.583 "num_blocks": 65536,
00:08:04.583 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:04.583 "assigned_rate_limits": {
00:08:04.583 "rw_ios_per_sec": 0,
00:08:04.583 "rw_mbytes_per_sec": 0,
00:08:04.583 "r_mbytes_per_sec": 0,
00:08:04.583 "w_mbytes_per_sec": 0
00:08:04.583 },
00:08:04.583 "claimed": true,
00:08:04.583 "claim_type": "exclusive_write",
00:08:04.583 "zoned": false,
00:08:04.583 "supported_io_types": {
00:08:04.583 "read": true,
00:08:04.583 "write": true,
00:08:04.583 "unmap": true,
00:08:04.583 "flush": true,
00:08:04.583 "reset": true,
00:08:04.583 "nvme_admin": false,
00:08:04.583 "nvme_io": false,
00:08:04.583 "nvme_io_md": false,
00:08:04.583 "write_zeroes": true,
00:08:04.583 "zcopy": true,
00:08:04.583 "get_zone_info": false,
00:08:04.583 "zone_management": false,
00:08:04.583 "zone_append": false,
00:08:04.583 "compare": false,
00:08:04.583 "compare_and_write": false,
00:08:04.583 "abort": true,
00:08:04.583 "seek_hole": false,
00:08:04.583 "seek_data": false,
00:08:04.583 "copy": true,
00:08:04.583 "nvme_iov_md": false
00:08:04.583 },
00:08:04.583 "memory_domains": [
00:08:04.583 {
00:08:04.583 "dma_device_id": "system",
00:08:04.583 "dma_device_type": 1
00:08:04.583 },
00:08:04.583 {
00:08:04.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:04.583 "dma_device_type": 2
00:08:04.583 }
00:08:04.583 ],
00:08:04.583 "driver_specific": {}
00:08:04.583 }
00:08:04.583 ]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:04.583 "name": "Existed_Raid",
00:08:04.583 "uuid": "40bef99c-ab03-46b0-9f08-e0e8de2f24a8",
00:08:04.583 "strip_size_kb": 64,
00:08:04.583 "state": "configuring",
00:08:04.583 "raid_level": "concat",
00:08:04.583 "superblock": true,
00:08:04.583
"num_base_bdevs": 3,
00:08:04.583 "num_base_bdevs_discovered": 1,
00:08:04.583 "num_base_bdevs_operational": 3,
00:08:04.583 "base_bdevs_list": [
00:08:04.583 {
00:08:04.583 "name": "BaseBdev1",
00:08:04.583 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:04.583 "is_configured": true,
00:08:04.583 "data_offset": 2048,
00:08:04.583 "data_size": 63488
00:08:04.583 },
00:08:04.583 {
00:08:04.583 "name": "BaseBdev2",
00:08:04.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:04.583 "is_configured": false,
00:08:04.583 "data_offset": 0,
00:08:04.583 "data_size": 0
00:08:04.583 },
00:08:04.583 {
00:08:04.583 "name": "BaseBdev3",
00:08:04.583 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:04.583 "is_configured": false,
00:08:04.583 "data_offset": 0,
00:08:04.583 "data_size": 0
00:08:04.583 }
00:08:04.583 ]
00:08:04.583 }'
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:04.583 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.851 [2024-11-27 21:40:27.887603] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:04.851 [2024-11-27 21:40:27.887648] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:04.851
21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.851 [2024-11-27 21:40:27.895633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:04.851 [2024-11-27 21:40:27.897488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:04.851 [2024-11-27 21:40:27.897562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:04.851 [2024-11-27 21:40:27.897603] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:04.851 [2024-11-27 21:40:27.897643] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:04.851 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:04.851 "name": "Existed_Raid",
00:08:04.851 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0",
00:08:04.851 "strip_size_kb": 64,
00:08:04.851 "state": "configuring",
00:08:04.851 "raid_level": "concat",
00:08:04.851 "superblock": true,
00:08:04.851 "num_base_bdevs": 3,
00:08:04.851 "num_base_bdevs_discovered": 1,
00:08:04.851 "num_base_bdevs_operational": 3,
00:08:04.851 "base_bdevs_list": [
00:08:04.851 {
00:08:04.851 "name": "BaseBdev1",
00:08:04.851 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:04.851 "is_configured": true,
00:08:04.851 "data_offset": 2048,
00:08:04.851 "data_size": 63488
00:08:04.851 },
00:08:04.851 {
00:08:04.851 "name": "BaseBdev2",
00:08:04.851 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:04.851 "is_configured": false,
00:08:04.851 "data_offset": 0,
00:08:04.851 "data_size": 0
00:08:04.851 },
00:08:04.851 {
00:08:04.851 "name": "BaseBdev3",
00:08:04.852 "uuid":
"00000000-0000-0000-0000-000000000000",
00:08:04.852 "is_configured": false,
00:08:04.852 "data_offset": 0,
00:08:04.852 "data_size": 0
00:08:04.852 }
00:08:04.852 ]
00:08:04.852 }'
00:08:04.852 21:40:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:04.852 21:40:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.436 [2024-11-27 21:40:28.373712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:05.436 BaseBdev2
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@10 -- # set +x
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.436 [
00:08:05.436 {
00:08:05.436 "name": "BaseBdev2",
00:08:05.436 "aliases": [
00:08:05.436 "a7012e32-5278-4b1e-a5dd-99577065293d"
00:08:05.436 ],
00:08:05.436 "product_name": "Malloc disk",
00:08:05.436 "block_size": 512,
00:08:05.436 "num_blocks": 65536,
00:08:05.436 "uuid": "a7012e32-5278-4b1e-a5dd-99577065293d",
00:08:05.436 "assigned_rate_limits": {
00:08:05.436 "rw_ios_per_sec": 0,
00:08:05.436 "rw_mbytes_per_sec": 0,
00:08:05.436 "r_mbytes_per_sec": 0,
00:08:05.436 "w_mbytes_per_sec": 0
00:08:05.436 },
00:08:05.436 "claimed": true,
00:08:05.436 "claim_type": "exclusive_write",
00:08:05.436 "zoned": false,
00:08:05.436 "supported_io_types": {
00:08:05.436 "read": true,
00:08:05.436 "write": true,
00:08:05.436 "unmap": true,
00:08:05.436 "flush": true,
00:08:05.436 "reset": true,
00:08:05.436 "nvme_admin": false,
00:08:05.436 "nvme_io": false,
00:08:05.436 "nvme_io_md": false,
00:08:05.436 "write_zeroes": true,
00:08:05.436 "zcopy": true,
00:08:05.436 "get_zone_info": false,
00:08:05.436 "zone_management": false,
00:08:05.436 "zone_append": false,
00:08:05.436 "compare": false,
00:08:05.436 "compare_and_write": false,
00:08:05.436 "abort": true,
00:08:05.436 "seek_hole": false,
00:08:05.436 "seek_data": false,
00:08:05.436 "copy": true,
00:08:05.436 "nvme_iov_md": false
00:08:05.436 },
00:08:05.436 "memory_domains": [
00:08:05.436 {
00:08:05.436 "dma_device_id": "system",
00:08:05.436 "dma_device_type": 1
00:08:05.436 },
00:08:05.436 {
00:08:05.436
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:05.436 "dma_device_type": 2
00:08:05.436 }
00:08:05.436 ],
00:08:05.436 "driver_specific": {}
00:08:05.436 }
00:08:05.436 ]
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.436 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.437 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.437 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.437 "name": "Existed_Raid",
00:08:05.437 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0",
00:08:05.437 "strip_size_kb": 64,
00:08:05.437 "state": "configuring",
00:08:05.437 "raid_level": "concat",
00:08:05.437 "superblock": true,
00:08:05.437 "num_base_bdevs": 3,
00:08:05.437 "num_base_bdevs_discovered": 2,
00:08:05.437 "num_base_bdevs_operational": 3,
00:08:05.437 "base_bdevs_list": [
00:08:05.437 {
00:08:05.437 "name": "BaseBdev1",
00:08:05.437 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:05.437 "is_configured": true,
00:08:05.437 "data_offset": 2048,
00:08:05.437 "data_size": 63488
00:08:05.437 },
00:08:05.437 {
00:08:05.437 "name": "BaseBdev2",
00:08:05.437 "uuid": "a7012e32-5278-4b1e-a5dd-99577065293d",
00:08:05.437 "is_configured": true,
00:08:05.437 "data_offset": 2048,
00:08:05.437 "data_size": 63488
00:08:05.437 },
00:08:05.437 {
00:08:05.437 "name": "BaseBdev3",
00:08:05.437 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.437 "is_configured": false,
00:08:05.437 "data_offset": 0,
00:08:05.437 "data_size": 0
00:08:05.437 }
00:08:05.437 ]
00:08:05.437 }'
00:08:05.437 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.437 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.697 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:05.697 21:40:28
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.697 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.957 [2024-11-27 21:40:28.829322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:08:05.957 [2024-11-27 21:40:28.829937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900
00:08:05.957 [2024-11-27 21:40:28.830024] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:08:05.957 BaseBdev3
00:08:05.957 [2024-11-27 21:40:28.831032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.957 [2024-11-27 21:40:28.831493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900
00:08:05.957 [2024-11-27 21:40:28.831535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:08:05.957 [2024-11-27 21:40:28.831930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.957 [
00:08:05.957 {
00:08:05.957 "name": "BaseBdev3",
00:08:05.957 "aliases": [
00:08:05.957 "4a214fa1-1484-4506-a44c-e055c0999448"
00:08:05.957 ],
00:08:05.957 "product_name": "Malloc disk",
00:08:05.957 "block_size": 512,
00:08:05.957 "num_blocks": 65536,
00:08:05.957 "uuid": "4a214fa1-1484-4506-a44c-e055c0999448",
00:08:05.957 "assigned_rate_limits": {
00:08:05.957 "rw_ios_per_sec": 0,
00:08:05.957 "rw_mbytes_per_sec": 0,
00:08:05.957 "r_mbytes_per_sec": 0,
00:08:05.957 "w_mbytes_per_sec": 0
00:08:05.957 },
00:08:05.957 "claimed": true,
00:08:05.957 "claim_type": "exclusive_write",
00:08:05.957 "zoned": false,
00:08:05.957 "supported_io_types": {
00:08:05.957 "read": true,
00:08:05.957 "write": true,
00:08:05.957 "unmap": true,
00:08:05.957 "flush": true,
00:08:05.957 "reset": true,
00:08:05.957 "nvme_admin": false,
00:08:05.957 "nvme_io": false,
00:08:05.957 "nvme_io_md": false,
00:08:05.957 "write_zeroes": true,
00:08:05.957 "zcopy": true,
00:08:05.957 "get_zone_info": false,
00:08:05.957 "zone_management": false,
00:08:05.957 "zone_append": false,
00:08:05.957 "compare": false,
00:08:05.957 "compare_and_write": false,
00:08:05.957 "abort": true,
00:08:05.957 "seek_hole": false,
00:08:05.957 "seek_data": false,
00:08:05.957 "copy": true,
00:08:05.957 "nvme_iov_md": false
00:08:05.957 },
00:08:05.957 "memory_domains": [
00:08:05.957 {
00:08:05.957 "dma_device_id": "system",
00:08:05.957 "dma_device_type": 1
00:08:05.957 },
00:08:05.957 {
00:08:05.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:05.957 "dma_device_type": 2
00:08:05.957 }
00:08:05.957 ],
00:08:05.957 "driver_specific": {}
00:08:05.957 }
00:08:05.957 ]
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.957 "name": "Existed_Raid",
00:08:05.957 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0",
00:08:05.957 "strip_size_kb": 64,
00:08:05.957 "state": "online",
00:08:05.957 "raid_level": "concat",
00:08:05.957 "superblock": true,
00:08:05.957 "num_base_bdevs": 3,
00:08:05.957 "num_base_bdevs_discovered": 3,
00:08:05.957 "num_base_bdevs_operational": 3,
00:08:05.957 "base_bdevs_list": [
00:08:05.957 {
00:08:05.957 "name": "BaseBdev1",
00:08:05.957 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:05.957 "is_configured": true,
00:08:05.957 "data_offset": 2048,
00:08:05.957 "data_size": 63488
00:08:05.957 },
00:08:05.957 {
00:08:05.957 "name": "BaseBdev2",
00:08:05.957 "uuid": "a7012e32-5278-4b1e-a5dd-99577065293d",
00:08:05.957 "is_configured": true,
00:08:05.957 "data_offset": 2048,
00:08:05.957 "data_size": 63488
00:08:05.957 },
00:08:05.957 {
00:08:05.957 "name": "BaseBdev3",
00:08:05.957 "uuid": "4a214fa1-1484-4506-a44c-e055c0999448",
00:08:05.957 "is_configured": true,
00:08:05.957 "data_offset": 2048,
00:08:05.957 "data_size": 63488
00:08:05.957 }
00:08:05.957 ]
00:08:05.957 }'
00:08:05.957 21:40:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.957 21:40:28
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.217 [2024-11-27 21:40:29.296718] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:08:06.217 "name": "Existed_Raid",
00:08:06.217 "aliases": [
00:08:06.217 "be8c0354-71ff-4bf6-a32c-46e56b1025d0"
00:08:06.217 ],
00:08:06.217 "product_name": "Raid Volume",
00:08:06.217 "block_size": 512,
00:08:06.217 "num_blocks": 190464,
00:08:06.217 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0",
00:08:06.217 "assigned_rate_limits": {
00:08:06.217 "rw_ios_per_sec": 0,
00:08:06.217 "rw_mbytes_per_sec": 0,
"r_mbytes_per_sec": 0,
00:08:06.217 "w_mbytes_per_sec": 0
00:08:06.217 },
00:08:06.217 "claimed": false,
00:08:06.217 "zoned": false,
00:08:06.217 "supported_io_types": {
00:08:06.217 "read": true,
00:08:06.217 "write": true,
00:08:06.217 "unmap": true,
00:08:06.217 "flush": true,
00:08:06.217 "reset": true,
00:08:06.217 "nvme_admin": false,
00:08:06.217 "nvme_io": false,
00:08:06.217 "nvme_io_md": false,
00:08:06.217 "write_zeroes": true,
00:08:06.217 "zcopy": false,
00:08:06.217 "get_zone_info": false,
00:08:06.217 "zone_management": false,
00:08:06.217 "zone_append": false,
00:08:06.217 "compare": false,
00:08:06.217 "compare_and_write": false,
00:08:06.217 "abort": false,
00:08:06.217 "seek_hole": false,
00:08:06.217 "seek_data": false,
00:08:06.217 "copy": false,
00:08:06.217 "nvme_iov_md": false
00:08:06.217 },
00:08:06.217 "memory_domains": [
00:08:06.217 {
00:08:06.217 "dma_device_id": "system",
00:08:06.217 "dma_device_type": 1
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:06.217 "dma_device_type": 2
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "dma_device_id": "system",
00:08:06.217 "dma_device_type": 1
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:06.217 "dma_device_type": 2
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "dma_device_id": "system",
00:08:06.217 "dma_device_type": 1
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:06.217 "dma_device_type": 2
00:08:06.217 }
00:08:06.217 ],
00:08:06.217 "driver_specific": {
00:08:06.217 "raid": {
00:08:06.217 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0",
00:08:06.217 "strip_size_kb": 64,
00:08:06.217 "state": "online",
00:08:06.217 "raid_level": "concat",
00:08:06.217 "superblock": true,
00:08:06.217 "num_base_bdevs": 3,
00:08:06.217 "num_base_bdevs_discovered": 3,
00:08:06.217 "num_base_bdevs_operational": 3,
00:08:06.217 "base_bdevs_list": [
00:08:06.217 {
00:08:06.217
"name": "BaseBdev1",
00:08:06.217 "uuid": "ae38b66a-709d-460c-8ed0-c037ba7f674c",
00:08:06.217 "is_configured": true,
00:08:06.217 "data_offset": 2048,
00:08:06.217 "data_size": 63488
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "name": "BaseBdev2",
00:08:06.217 "uuid": "a7012e32-5278-4b1e-a5dd-99577065293d",
00:08:06.217 "is_configured": true,
00:08:06.217 "data_offset": 2048,
00:08:06.217 "data_size": 63488
00:08:06.217 },
00:08:06.217 {
00:08:06.217 "name": "BaseBdev3",
00:08:06.217 "uuid": "4a214fa1-1484-4506-a44c-e055c0999448",
00:08:06.217 "is_configured": true,
00:08:06.217 "data_offset": 2048,
00:08:06.217 "data_size": 63488
00:08:06.217 }
00:08:06.217 ]
00:08:06.217 }
00:08:06.217 }
00:08:06.217 }'
00:08:06.217 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:08:06.477 BaseBdev2
00:08:06.477 BaseBdev3'
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:06.477 21:40:29
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.477 [2024-11-27 21:40:29.560023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.477 [2024-11-27 21:40:29.560047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.477 [2024-11-27 21:40:29.560098] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.477 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.737 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.737 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.737 "name": "Existed_Raid", 00:08:06.737 "uuid": "be8c0354-71ff-4bf6-a32c-46e56b1025d0", 00:08:06.737 "strip_size_kb": 64, 00:08:06.737 "state": "offline", 00:08:06.737 "raid_level": "concat", 00:08:06.737 "superblock": true, 00:08:06.737 "num_base_bdevs": 3, 00:08:06.737 "num_base_bdevs_discovered": 2, 00:08:06.737 "num_base_bdevs_operational": 2, 00:08:06.737 "base_bdevs_list": [ 00:08:06.737 { 00:08:06.737 "name": null, 00:08:06.737 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:06.737 "is_configured": false, 00:08:06.737 "data_offset": 0, 00:08:06.737 "data_size": 63488 00:08:06.737 }, 00:08:06.737 { 00:08:06.737 "name": "BaseBdev2", 00:08:06.737 "uuid": "a7012e32-5278-4b1e-a5dd-99577065293d", 00:08:06.737 "is_configured": true, 00:08:06.737 "data_offset": 2048, 00:08:06.737 "data_size": 63488 00:08:06.737 }, 00:08:06.737 { 00:08:06.737 "name": "BaseBdev3", 00:08:06.737 "uuid": "4a214fa1-1484-4506-a44c-e055c0999448", 00:08:06.737 "is_configured": true, 00:08:06.737 "data_offset": 2048, 00:08:06.737 "data_size": 63488 00:08:06.737 } 00:08:06.737 ] 00:08:06.737 }' 00:08:06.737 21:40:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.737 21:40:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.998 [2024-11-27 21:40:30.082229] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.998 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.258 [2024-11-27 21:40:30.149101] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:07.258 [2024-11-27 21:40:30.149186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.258 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 BaseBdev2 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 
21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 [ 00:08:07.259 { 00:08:07.259 "name": "BaseBdev2", 00:08:07.259 "aliases": [ 00:08:07.259 "4fae2618-d381-4759-a010-ef21e0e79c62" 00:08:07.259 ], 00:08:07.259 "product_name": "Malloc disk", 00:08:07.259 "block_size": 512, 00:08:07.259 "num_blocks": 65536, 00:08:07.259 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:07.259 "assigned_rate_limits": { 00:08:07.259 "rw_ios_per_sec": 0, 00:08:07.259 "rw_mbytes_per_sec": 0, 00:08:07.259 "r_mbytes_per_sec": 0, 00:08:07.259 "w_mbytes_per_sec": 0 
00:08:07.259 }, 00:08:07.259 "claimed": false, 00:08:07.259 "zoned": false, 00:08:07.259 "supported_io_types": { 00:08:07.259 "read": true, 00:08:07.259 "write": true, 00:08:07.259 "unmap": true, 00:08:07.259 "flush": true, 00:08:07.259 "reset": true, 00:08:07.259 "nvme_admin": false, 00:08:07.259 "nvme_io": false, 00:08:07.259 "nvme_io_md": false, 00:08:07.259 "write_zeroes": true, 00:08:07.259 "zcopy": true, 00:08:07.259 "get_zone_info": false, 00:08:07.259 "zone_management": false, 00:08:07.259 "zone_append": false, 00:08:07.259 "compare": false, 00:08:07.259 "compare_and_write": false, 00:08:07.259 "abort": true, 00:08:07.259 "seek_hole": false, 00:08:07.259 "seek_data": false, 00:08:07.259 "copy": true, 00:08:07.259 "nvme_iov_md": false 00:08:07.259 }, 00:08:07.259 "memory_domains": [ 00:08:07.259 { 00:08:07.259 "dma_device_id": "system", 00:08:07.259 "dma_device_type": 1 00:08:07.259 }, 00:08:07.259 { 00:08:07.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.259 "dma_device_type": 2 00:08:07.259 } 00:08:07.259 ], 00:08:07.259 "driver_specific": {} 00:08:07.259 } 00:08:07.259 ] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 BaseBdev3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.259 [ 00:08:07.259 { 00:08:07.259 "name": "BaseBdev3", 00:08:07.259 "aliases": [ 00:08:07.259 "10912edf-03b8-4473-be11-86b95323a925" 00:08:07.259 ], 00:08:07.259 "product_name": "Malloc disk", 00:08:07.259 "block_size": 512, 00:08:07.259 "num_blocks": 65536, 00:08:07.259 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:07.259 "assigned_rate_limits": { 00:08:07.259 "rw_ios_per_sec": 0, 00:08:07.259 "rw_mbytes_per_sec": 0, 
00:08:07.259 "r_mbytes_per_sec": 0, 00:08:07.259 "w_mbytes_per_sec": 0 00:08:07.259 }, 00:08:07.259 "claimed": false, 00:08:07.259 "zoned": false, 00:08:07.259 "supported_io_types": { 00:08:07.259 "read": true, 00:08:07.259 "write": true, 00:08:07.259 "unmap": true, 00:08:07.259 "flush": true, 00:08:07.259 "reset": true, 00:08:07.259 "nvme_admin": false, 00:08:07.259 "nvme_io": false, 00:08:07.259 "nvme_io_md": false, 00:08:07.259 "write_zeroes": true, 00:08:07.259 "zcopy": true, 00:08:07.259 "get_zone_info": false, 00:08:07.259 "zone_management": false, 00:08:07.259 "zone_append": false, 00:08:07.259 "compare": false, 00:08:07.259 "compare_and_write": false, 00:08:07.259 "abort": true, 00:08:07.259 "seek_hole": false, 00:08:07.259 "seek_data": false, 00:08:07.259 "copy": true, 00:08:07.259 "nvme_iov_md": false 00:08:07.259 }, 00:08:07.259 "memory_domains": [ 00:08:07.259 { 00:08:07.259 "dma_device_id": "system", 00:08:07.259 "dma_device_type": 1 00:08:07.259 }, 00:08:07.259 { 00:08:07.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.259 "dma_device_type": 2 00:08:07.259 } 00:08:07.259 ], 00:08:07.259 "driver_specific": {} 00:08:07.259 } 00:08:07.259 ] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:07.259 [2024-11-27 21:40:30.324050] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:07.259 [2024-11-27 21:40:30.324140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:07.259 [2024-11-27 21:40:30.324196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.259 [2024-11-27 21:40:30.326042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.259 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.260 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.260 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.519 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.519 "name": "Existed_Raid", 00:08:07.519 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:07.519 "strip_size_kb": 64, 00:08:07.519 "state": "configuring", 00:08:07.519 "raid_level": "concat", 00:08:07.519 "superblock": true, 00:08:07.519 "num_base_bdevs": 3, 00:08:07.519 "num_base_bdevs_discovered": 2, 00:08:07.519 "num_base_bdevs_operational": 3, 00:08:07.519 "base_bdevs_list": [ 00:08:07.519 { 00:08:07.519 "name": "BaseBdev1", 00:08:07.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.519 "is_configured": false, 00:08:07.519 "data_offset": 0, 00:08:07.519 "data_size": 0 00:08:07.519 }, 00:08:07.519 { 00:08:07.519 "name": "BaseBdev2", 00:08:07.519 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:07.519 "is_configured": true, 00:08:07.519 "data_offset": 2048, 00:08:07.519 "data_size": 63488 00:08:07.519 }, 00:08:07.519 { 00:08:07.519 "name": "BaseBdev3", 00:08:07.519 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:07.519 "is_configured": true, 00:08:07.519 "data_offset": 2048, 00:08:07.519 "data_size": 63488 00:08:07.519 } 00:08:07.519 ] 00:08:07.519 }' 00:08:07.519 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.519 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev2 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.779 [2024-11-27 21:40:30.759351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:07.779 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.780 "name": "Existed_Raid", 00:08:07.780 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:07.780 "strip_size_kb": 64, 00:08:07.780 "state": "configuring", 00:08:07.780 "raid_level": "concat", 00:08:07.780 "superblock": true, 00:08:07.780 "num_base_bdevs": 3, 00:08:07.780 "num_base_bdevs_discovered": 1, 00:08:07.780 "num_base_bdevs_operational": 3, 00:08:07.780 "base_bdevs_list": [ 00:08:07.780 { 00:08:07.780 "name": "BaseBdev1", 00:08:07.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:07.780 "is_configured": false, 00:08:07.780 "data_offset": 0, 00:08:07.780 "data_size": 0 00:08:07.780 }, 00:08:07.780 { 00:08:07.780 "name": null, 00:08:07.780 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:07.780 "is_configured": false, 00:08:07.780 "data_offset": 0, 00:08:07.780 "data_size": 63488 00:08:07.780 }, 00:08:07.780 { 00:08:07.780 "name": "BaseBdev3", 00:08:07.780 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:07.780 "is_configured": true, 00:08:07.780 "data_offset": 2048, 00:08:07.780 "data_size": 63488 00:08:07.780 } 00:08:07.780 ] 00:08:07.780 }' 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.780 21:40:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.349 [2024-11-27 21:40:31.257359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:08.349 BaseBdev1 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.349 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.349 [ 00:08:08.349 { 00:08:08.349 "name": "BaseBdev1", 00:08:08.349 "aliases": [ 00:08:08.350 "55cf95b6-c9cd-4007-875a-54239bd98f28" 00:08:08.350 ], 00:08:08.350 "product_name": "Malloc disk", 00:08:08.350 "block_size": 512, 00:08:08.350 "num_blocks": 65536, 00:08:08.350 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:08.350 "assigned_rate_limits": { 00:08:08.350 "rw_ios_per_sec": 0, 00:08:08.350 "rw_mbytes_per_sec": 0, 00:08:08.350 "r_mbytes_per_sec": 0, 00:08:08.350 "w_mbytes_per_sec": 0 00:08:08.350 }, 00:08:08.350 "claimed": true, 00:08:08.350 "claim_type": "exclusive_write", 00:08:08.350 "zoned": false, 00:08:08.350 "supported_io_types": { 00:08:08.350 "read": true, 00:08:08.350 "write": true, 00:08:08.350 "unmap": true, 00:08:08.350 "flush": true, 00:08:08.350 "reset": true, 00:08:08.350 "nvme_admin": false, 00:08:08.350 "nvme_io": false, 00:08:08.350 "nvme_io_md": false, 00:08:08.350 "write_zeroes": true, 00:08:08.350 "zcopy": true, 00:08:08.350 "get_zone_info": false, 00:08:08.350 "zone_management": false, 00:08:08.350 "zone_append": false, 00:08:08.350 "compare": false, 00:08:08.350 "compare_and_write": false, 00:08:08.350 "abort": true, 00:08:08.350 "seek_hole": false, 00:08:08.350 "seek_data": false, 00:08:08.350 "copy": true, 00:08:08.350 "nvme_iov_md": false 00:08:08.350 }, 00:08:08.350 "memory_domains": [ 00:08:08.350 { 00:08:08.350 "dma_device_id": "system", 00:08:08.350 "dma_device_type": 1 00:08:08.350 }, 00:08:08.350 { 00:08:08.350 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:08.350 "dma_device_type": 2 00:08:08.350 } 00:08:08.350 ], 00:08:08.350 "driver_specific": {} 00:08:08.350 } 00:08:08.350 ] 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.350 "name": "Existed_Raid", 00:08:08.350 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:08.350 "strip_size_kb": 64, 00:08:08.350 "state": "configuring", 00:08:08.350 "raid_level": "concat", 00:08:08.350 "superblock": true, 00:08:08.350 "num_base_bdevs": 3, 00:08:08.350 "num_base_bdevs_discovered": 2, 00:08:08.350 "num_base_bdevs_operational": 3, 00:08:08.350 "base_bdevs_list": [ 00:08:08.350 { 00:08:08.350 "name": "BaseBdev1", 00:08:08.350 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:08.350 "is_configured": true, 00:08:08.350 "data_offset": 2048, 00:08:08.350 "data_size": 63488 00:08:08.350 }, 00:08:08.350 { 00:08:08.350 "name": null, 00:08:08.350 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:08.350 "is_configured": false, 00:08:08.350 "data_offset": 0, 00:08:08.350 "data_size": 63488 00:08:08.350 }, 00:08:08.350 { 00:08:08.350 "name": "BaseBdev3", 00:08:08.350 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:08.350 "is_configured": true, 00:08:08.350 "data_offset": 2048, 00:08:08.350 "data_size": 63488 00:08:08.350 } 00:08:08.350 ] 00:08:08.350 }' 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.350 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 [2024-11-27 21:40:31.804494] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.920 "name": "Existed_Raid", 00:08:08.920 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:08.920 "strip_size_kb": 64, 00:08:08.920 "state": "configuring", 00:08:08.920 "raid_level": "concat", 00:08:08.920 "superblock": true, 00:08:08.920 "num_base_bdevs": 3, 00:08:08.920 "num_base_bdevs_discovered": 1, 00:08:08.920 "num_base_bdevs_operational": 3, 00:08:08.920 "base_bdevs_list": [ 00:08:08.920 { 00:08:08.920 "name": "BaseBdev1", 00:08:08.920 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:08.920 "is_configured": true, 00:08:08.920 "data_offset": 2048, 00:08:08.920 "data_size": 63488 00:08:08.920 }, 00:08:08.920 { 00:08:08.920 "name": null, 00:08:08.920 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:08.920 "is_configured": false, 00:08:08.920 "data_offset": 0, 00:08:08.920 "data_size": 63488 00:08:08.920 }, 00:08:08.920 { 00:08:08.920 "name": null, 00:08:08.920 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:08.920 "is_configured": false, 00:08:08.920 "data_offset": 0, 00:08:08.920 "data_size": 63488 00:08:08.920 } 00:08:08.920 ] 00:08:08.920 }' 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.920 21:40:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.180 [2024-11-27 21:40:32.275730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.180 21:40:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.180 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.441 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.441 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.441 "name": "Existed_Raid", 00:08:09.441 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:09.441 "strip_size_kb": 64, 00:08:09.441 "state": "configuring", 00:08:09.441 "raid_level": "concat", 00:08:09.441 "superblock": true, 00:08:09.441 "num_base_bdevs": 3, 00:08:09.441 "num_base_bdevs_discovered": 2, 00:08:09.441 "num_base_bdevs_operational": 3, 00:08:09.441 "base_bdevs_list": [ 00:08:09.441 { 00:08:09.441 "name": "BaseBdev1", 00:08:09.441 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:09.441 "is_configured": true, 00:08:09.441 "data_offset": 2048, 00:08:09.441 "data_size": 63488 00:08:09.441 }, 00:08:09.441 { 00:08:09.441 "name": null, 00:08:09.441 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:09.441 "is_configured": 
false, 00:08:09.441 "data_offset": 0, 00:08:09.441 "data_size": 63488 00:08:09.441 }, 00:08:09.441 { 00:08:09.441 "name": "BaseBdev3", 00:08:09.441 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:09.441 "is_configured": true, 00:08:09.441 "data_offset": 2048, 00:08:09.441 "data_size": 63488 00:08:09.441 } 00:08:09.441 ] 00:08:09.441 }' 00:08:09.441 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.441 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.701 [2024-11-27 21:40:32.790887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:09.701 21:40:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.701 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.961 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.961 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.961 "name": "Existed_Raid", 00:08:09.961 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:09.961 "strip_size_kb": 64, 00:08:09.961 "state": "configuring", 00:08:09.961 "raid_level": "concat", 00:08:09.961 "superblock": true, 00:08:09.961 "num_base_bdevs": 3, 00:08:09.961 
"num_base_bdevs_discovered": 1, 00:08:09.961 "num_base_bdevs_operational": 3, 00:08:09.961 "base_bdevs_list": [ 00:08:09.961 { 00:08:09.961 "name": null, 00:08:09.961 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:09.961 "is_configured": false, 00:08:09.961 "data_offset": 0, 00:08:09.961 "data_size": 63488 00:08:09.961 }, 00:08:09.961 { 00:08:09.961 "name": null, 00:08:09.961 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:09.961 "is_configured": false, 00:08:09.961 "data_offset": 0, 00:08:09.961 "data_size": 63488 00:08:09.961 }, 00:08:09.961 { 00:08:09.961 "name": "BaseBdev3", 00:08:09.961 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:09.961 "is_configured": true, 00:08:09.961 "data_offset": 2048, 00:08:09.961 "data_size": 63488 00:08:09.961 } 00:08:09.961 ] 00:08:09.961 }' 00:08:09.961 21:40:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.961 21:40:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.221 21:40:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.221 [2024-11-27 21:40:33.252483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.221 
21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.221 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.221 "name": "Existed_Raid", 00:08:10.221 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:10.221 "strip_size_kb": 64, 00:08:10.221 "state": "configuring", 00:08:10.221 "raid_level": "concat", 00:08:10.221 "superblock": true, 00:08:10.222 "num_base_bdevs": 3, 00:08:10.222 "num_base_bdevs_discovered": 2, 00:08:10.222 "num_base_bdevs_operational": 3, 00:08:10.222 "base_bdevs_list": [ 00:08:10.222 { 00:08:10.222 "name": null, 00:08:10.222 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:10.222 "is_configured": false, 00:08:10.222 "data_offset": 0, 00:08:10.222 "data_size": 63488 00:08:10.222 }, 00:08:10.222 { 00:08:10.222 "name": "BaseBdev2", 00:08:10.222 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:10.222 "is_configured": true, 00:08:10.222 "data_offset": 2048, 00:08:10.222 "data_size": 63488 00:08:10.222 }, 00:08:10.222 { 00:08:10.222 "name": "BaseBdev3", 00:08:10.222 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:10.222 "is_configured": true, 00:08:10.222 "data_offset": 2048, 00:08:10.222 "data_size": 63488 00:08:10.222 } 00:08:10.222 ] 00:08:10.222 }' 00:08:10.222 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.222 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
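For readers following the trace, the step repeated throughout — `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'` and checked by `verify_raid_bdev_state` — can be mimicked without a live SPDK target. This is a minimal sketch; the JSON sample is abridged from the log above, not real RPC output:

```python
import json

# Abridged sample of `rpc.py bdev_raid_get_bdevs all` output, copied from the log.
rpc_output = json.dumps([{
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "concat",
    "strip_size_kb": 64,
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 2,
}])

# Equivalent of: jq -r '.[] | select(.name == "Existed_Raid")'
info = next(b for b in json.loads(rpc_output) if b["name"] == "Existed_Raid")

# The comparisons verify_raid_bdev_state performs on the selected object.
assert info["state"] == "configuring"
assert info["raid_level"] == "concat"
assert info["strip_size_kb"] == 64
assert info["num_base_bdevs_discovered"] == 2
print(info["state"])  # -> configuring
```

The test stays in "configuring" here because only 2 of 3 base bdevs are discovered; the raid only transitions to "online" once all slots are claimed.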
00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55cf95b6-c9cd-4007-875a-54239bd98f28 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 [2024-11-27 21:40:33.782361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:10.790 [2024-11-27 21:40:33.782514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:10.790 [2024-11-27 21:40:33.782531] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:10.790 [2024-11-27 21:40:33.782765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:10.790 [2024-11-27 21:40:33.782914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:10.790 [2024-11-27 21:40:33.782925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 
00:08:10.790 [2024-11-27 21:40:33.783029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.790 NewBaseBdev 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:10.790 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 [ 00:08:10.791 { 00:08:10.791 "name": "NewBaseBdev", 00:08:10.791 "aliases": [ 00:08:10.791 "55cf95b6-c9cd-4007-875a-54239bd98f28" 00:08:10.791 ], 00:08:10.791 "product_name": "Malloc disk", 00:08:10.791 "block_size": 512, 
00:08:10.791 "num_blocks": 65536, 00:08:10.791 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:10.791 "assigned_rate_limits": { 00:08:10.791 "rw_ios_per_sec": 0, 00:08:10.791 "rw_mbytes_per_sec": 0, 00:08:10.791 "r_mbytes_per_sec": 0, 00:08:10.791 "w_mbytes_per_sec": 0 00:08:10.791 }, 00:08:10.791 "claimed": true, 00:08:10.791 "claim_type": "exclusive_write", 00:08:10.791 "zoned": false, 00:08:10.791 "supported_io_types": { 00:08:10.791 "read": true, 00:08:10.791 "write": true, 00:08:10.791 "unmap": true, 00:08:10.791 "flush": true, 00:08:10.791 "reset": true, 00:08:10.791 "nvme_admin": false, 00:08:10.791 "nvme_io": false, 00:08:10.791 "nvme_io_md": false, 00:08:10.791 "write_zeroes": true, 00:08:10.791 "zcopy": true, 00:08:10.791 "get_zone_info": false, 00:08:10.791 "zone_management": false, 00:08:10.791 "zone_append": false, 00:08:10.791 "compare": false, 00:08:10.791 "compare_and_write": false, 00:08:10.791 "abort": true, 00:08:10.791 "seek_hole": false, 00:08:10.791 "seek_data": false, 00:08:10.791 "copy": true, 00:08:10.791 "nvme_iov_md": false 00:08:10.791 }, 00:08:10.791 "memory_domains": [ 00:08:10.791 { 00:08:10.791 "dma_device_id": "system", 00:08:10.791 "dma_device_type": 1 00:08:10.791 }, 00:08:10.791 { 00:08:10.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.791 "dma_device_type": 2 00:08:10.791 } 00:08:10.791 ], 00:08:10.791 "driver_specific": {} 00:08:10.791 } 00:08:10.791 ] 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.791 "name": "Existed_Raid", 00:08:10.791 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:10.791 "strip_size_kb": 64, 00:08:10.791 "state": "online", 00:08:10.791 "raid_level": "concat", 00:08:10.791 "superblock": true, 00:08:10.791 "num_base_bdevs": 3, 00:08:10.791 "num_base_bdevs_discovered": 3, 00:08:10.791 "num_base_bdevs_operational": 3, 00:08:10.791 "base_bdevs_list": [ 00:08:10.791 { 00:08:10.791 "name": "NewBaseBdev", 00:08:10.791 "uuid": 
"55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:10.791 "is_configured": true, 00:08:10.791 "data_offset": 2048, 00:08:10.791 "data_size": 63488 00:08:10.791 }, 00:08:10.791 { 00:08:10.791 "name": "BaseBdev2", 00:08:10.791 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:10.791 "is_configured": true, 00:08:10.791 "data_offset": 2048, 00:08:10.791 "data_size": 63488 00:08:10.791 }, 00:08:10.791 { 00:08:10.791 "name": "BaseBdev3", 00:08:10.791 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:10.791 "is_configured": true, 00:08:10.791 "data_offset": 2048, 00:08:10.791 "data_size": 63488 00:08:10.791 } 00:08:10.791 ] 00:08:10.791 }' 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.791 21:40:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:11.361 [2024-11-27 21:40:34.297830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.361 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.361 "name": "Existed_Raid", 00:08:11.361 "aliases": [ 00:08:11.361 "4ebb5ebb-efae-4156-ba64-9127fbc2815d" 00:08:11.361 ], 00:08:11.361 "product_name": "Raid Volume", 00:08:11.361 "block_size": 512, 00:08:11.361 "num_blocks": 190464, 00:08:11.361 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:11.361 "assigned_rate_limits": { 00:08:11.361 "rw_ios_per_sec": 0, 00:08:11.361 "rw_mbytes_per_sec": 0, 00:08:11.361 "r_mbytes_per_sec": 0, 00:08:11.361 "w_mbytes_per_sec": 0 00:08:11.361 }, 00:08:11.361 "claimed": false, 00:08:11.361 "zoned": false, 00:08:11.361 "supported_io_types": { 00:08:11.361 "read": true, 00:08:11.361 "write": true, 00:08:11.361 "unmap": true, 00:08:11.361 "flush": true, 00:08:11.361 "reset": true, 00:08:11.361 "nvme_admin": false, 00:08:11.361 "nvme_io": false, 00:08:11.361 "nvme_io_md": false, 00:08:11.361 "write_zeroes": true, 00:08:11.361 "zcopy": false, 00:08:11.361 "get_zone_info": false, 00:08:11.361 "zone_management": false, 00:08:11.361 "zone_append": false, 00:08:11.361 "compare": false, 00:08:11.361 "compare_and_write": false, 00:08:11.361 "abort": false, 00:08:11.361 "seek_hole": false, 00:08:11.361 "seek_data": false, 00:08:11.361 "copy": false, 00:08:11.361 "nvme_iov_md": false 00:08:11.361 }, 00:08:11.361 "memory_domains": [ 00:08:11.361 { 00:08:11.361 "dma_device_id": "system", 00:08:11.361 "dma_device_type": 1 00:08:11.361 }, 00:08:11.361 { 00:08:11.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.361 "dma_device_type": 2 00:08:11.361 }, 00:08:11.361 { 00:08:11.361 "dma_device_id": "system", 00:08:11.361 "dma_device_type": 1 00:08:11.361 }, 00:08:11.361 { 00:08:11.362 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.362 "dma_device_type": 2 00:08:11.362 }, 00:08:11.362 { 00:08:11.362 "dma_device_id": "system", 00:08:11.362 "dma_device_type": 1 00:08:11.362 }, 00:08:11.362 { 00:08:11.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.362 "dma_device_type": 2 00:08:11.362 } 00:08:11.362 ], 00:08:11.362 "driver_specific": { 00:08:11.362 "raid": { 00:08:11.362 "uuid": "4ebb5ebb-efae-4156-ba64-9127fbc2815d", 00:08:11.362 "strip_size_kb": 64, 00:08:11.362 "state": "online", 00:08:11.362 "raid_level": "concat", 00:08:11.362 "superblock": true, 00:08:11.362 "num_base_bdevs": 3, 00:08:11.362 "num_base_bdevs_discovered": 3, 00:08:11.362 "num_base_bdevs_operational": 3, 00:08:11.362 "base_bdevs_list": [ 00:08:11.362 { 00:08:11.362 "name": "NewBaseBdev", 00:08:11.362 "uuid": "55cf95b6-c9cd-4007-875a-54239bd98f28", 00:08:11.362 "is_configured": true, 00:08:11.362 "data_offset": 2048, 00:08:11.362 "data_size": 63488 00:08:11.362 }, 00:08:11.362 { 00:08:11.362 "name": "BaseBdev2", 00:08:11.362 "uuid": "4fae2618-d381-4759-a010-ef21e0e79c62", 00:08:11.362 "is_configured": true, 00:08:11.362 "data_offset": 2048, 00:08:11.362 "data_size": 63488 00:08:11.362 }, 00:08:11.362 { 00:08:11.362 "name": "BaseBdev3", 00:08:11.362 "uuid": "10912edf-03b8-4473-be11-86b95323a925", 00:08:11.362 "is_configured": true, 00:08:11.362 "data_offset": 2048, 00:08:11.362 "data_size": 63488 00:08:11.362 } 00:08:11.362 ] 00:08:11.362 } 00:08:11.362 } 00:08:11.362 }' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:11.362 BaseBdev2 00:08:11.362 BaseBdev3' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
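The odd-looking `cmp_raid_bdev='512 '` values and the pattern match `[[ 512 == \5\1\2\ \ \  ]]` come from jq's `join`: `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` turns each null field into an empty string, so a plain 512-byte bdev with no metadata joins to `"512"` followed by three spaces. A sketch of that behavior (the `jq_join` helper is a hypothetical stand-in for jq's `join/1`, not an SPDK function):

```python
def jq_join(values, sep=" "):
    # Mirrors jq's join/1: null becomes "", numbers and booleans
    # are rendered as their JSON text.
    return sep.join("" if v is None else str(v).lower() for v in values)

# [.block_size, .md_size, .md_interleave, .dif_type] for bdevs with no metadata:
raid_fields = [512, None, None, None]   # Existed_Raid
base_fields = [512, None, None, None]   # NewBaseBdev / BaseBdev2 / BaseBdev3

cmp_raid_bdev = jq_join(raid_fields)
cmp_base_bdev = jq_join(base_fields)

assert cmp_raid_bdev == "512   "   # "512" + three spaces, hence \5\1\2\ \ \ 
assert cmp_raid_bdev == cmp_base_bdev
print(repr(cmp_raid_bdev))  # -> '512   '
```

So the per-bdev loop in `verify_raid_bdev_properties` is simply asserting that the raid volume and each configured base bdev agree on block size and metadata layout.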
00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.362 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.622 [2024-11-27 21:40:34.529134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.622 [2024-11-27 21:40:34.529206] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.622 [2024-11-27 21:40:34.529285] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.622 [2024-11-27 21:40:34.529344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.622 [2024-11-27 21:40:34.529357] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 77084 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77084 ']' 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 77084 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:11.622 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77084 00:08:11.623 killing process with pid 77084 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77084' 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 77084 00:08:11.623 [2024-11-27 21:40:34.576060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.623 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 77084 00:08:11.623 [2024-11-27 21:40:34.605784] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.882 21:40:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:11.882 00:08:11.882 real 0m8.725s 00:08:11.882 user 0m15.004s 00:08:11.882 sys 0m1.668s 00:08:11.882 ************************************ 00:08:11.882 END TEST raid_state_function_test_sb 
00:08:11.882 ************************************ 00:08:11.882 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.882 21:40:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.882 21:40:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:11.882 21:40:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:11.883 21:40:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.883 21:40:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.883 ************************************ 00:08:11.883 START TEST raid_superblock_test 00:08:11.883 ************************************ 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:11.883 21:40:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=77688 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 77688 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 77688 ']' 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.883 21:40:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.883 [2024-11-27 21:40:34.970805] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:11.883 [2024-11-27 21:40:34.971020] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77688 ] 00:08:12.142 [2024-11-27 21:40:35.105158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.142 [2024-11-27 21:40:35.131273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.142 [2024-11-27 21:40:35.174169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.142 [2024-11-27 21:40:35.174210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.710 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:12.711 
21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.711 malloc1 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.711 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 [2024-11-27 21:40:35.837848] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:12.970 [2024-11-27 21:40:35.837963] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.970 [2024-11-27 21:40:35.838010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:12.970 [2024-11-27 21:40:35.838046] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.970 [2024-11-27 21:40:35.840204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.970 [2024-11-27 21:40:35.840283] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:12.970 pt1 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 malloc2 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 [2024-11-27 21:40:35.870327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:12.970 [2024-11-27 21:40:35.870377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.970 [2024-11-27 21:40:35.870411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:12.970 [2024-11-27 21:40:35.870420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.970 [2024-11-27 21:40:35.872536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.970 [2024-11-27 21:40:35.872572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:12.970 
pt2 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 malloc3 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 [2024-11-27 21:40:35.898750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:12.970 [2024-11-27 21:40:35.898861] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:12.970 [2024-11-27 21:40:35.898914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:12.970 [2024-11-27 21:40:35.898955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:12.970 [2024-11-27 21:40:35.901020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:12.970 [2024-11-27 21:40:35.901089] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:12.970 pt3 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.970 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.970 [2024-11-27 21:40:35.910812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:12.970 [2024-11-27 21:40:35.912628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:12.970 [2024-11-27 21:40:35.912721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:12.970 [2024-11-27 21:40:35.912912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:12.970 [2024-11-27 21:40:35.912958] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.970 [2024-11-27 21:40:35.913256] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 
00:08:12.970 [2024-11-27 21:40:35.913437] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:12.970 [2024-11-27 21:40:35.913480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:12.971 [2024-11-27 21:40:35.913661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.971 21:40:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.971 "name": "raid_bdev1", 00:08:12.971 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:12.971 "strip_size_kb": 64, 00:08:12.971 "state": "online", 00:08:12.971 "raid_level": "concat", 00:08:12.971 "superblock": true, 00:08:12.971 "num_base_bdevs": 3, 00:08:12.971 "num_base_bdevs_discovered": 3, 00:08:12.971 "num_base_bdevs_operational": 3, 00:08:12.971 "base_bdevs_list": [ 00:08:12.971 { 00:08:12.971 "name": "pt1", 00:08:12.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:12.971 "is_configured": true, 00:08:12.971 "data_offset": 2048, 00:08:12.971 "data_size": 63488 00:08:12.971 }, 00:08:12.971 { 00:08:12.971 "name": "pt2", 00:08:12.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:12.971 "is_configured": true, 00:08:12.971 "data_offset": 2048, 00:08:12.971 "data_size": 63488 00:08:12.971 }, 00:08:12.971 { 00:08:12.971 "name": "pt3", 00:08:12.971 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:12.971 "is_configured": true, 00:08:12.971 "data_offset": 2048, 00:08:12.971 "data_size": 63488 00:08:12.971 } 00:08:12.971 ] 00:08:12.971 }' 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.971 21:40:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 [2024-11-27 21:40:36.386265] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.539 "name": "raid_bdev1", 00:08:13.539 "aliases": [ 00:08:13.539 "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c" 00:08:13.539 ], 00:08:13.539 "product_name": "Raid Volume", 00:08:13.539 "block_size": 512, 00:08:13.539 "num_blocks": 190464, 00:08:13.539 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:13.539 "assigned_rate_limits": { 00:08:13.539 "rw_ios_per_sec": 0, 00:08:13.539 "rw_mbytes_per_sec": 0, 00:08:13.539 "r_mbytes_per_sec": 0, 00:08:13.539 "w_mbytes_per_sec": 0 00:08:13.539 }, 00:08:13.539 "claimed": false, 00:08:13.539 "zoned": false, 00:08:13.539 "supported_io_types": { 00:08:13.539 "read": true, 00:08:13.539 "write": true, 00:08:13.539 "unmap": true, 00:08:13.539 "flush": true, 00:08:13.539 "reset": true, 00:08:13.539 "nvme_admin": false, 00:08:13.539 "nvme_io": false, 00:08:13.539 "nvme_io_md": false, 00:08:13.539 "write_zeroes": true, 00:08:13.539 "zcopy": false, 00:08:13.539 "get_zone_info": false, 00:08:13.539 "zone_management": false, 00:08:13.539 "zone_append": false, 00:08:13.539 "compare": 
false, 00:08:13.539 "compare_and_write": false, 00:08:13.539 "abort": false, 00:08:13.539 "seek_hole": false, 00:08:13.539 "seek_data": false, 00:08:13.539 "copy": false, 00:08:13.539 "nvme_iov_md": false 00:08:13.539 }, 00:08:13.539 "memory_domains": [ 00:08:13.539 { 00:08:13.539 "dma_device_id": "system", 00:08:13.539 "dma_device_type": 1 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.539 "dma_device_type": 2 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "system", 00:08:13.539 "dma_device_type": 1 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.539 "dma_device_type": 2 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "system", 00:08:13.539 "dma_device_type": 1 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.539 "dma_device_type": 2 00:08:13.539 } 00:08:13.539 ], 00:08:13.539 "driver_specific": { 00:08:13.539 "raid": { 00:08:13.539 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:13.539 "strip_size_kb": 64, 00:08:13.539 "state": "online", 00:08:13.539 "raid_level": "concat", 00:08:13.539 "superblock": true, 00:08:13.539 "num_base_bdevs": 3, 00:08:13.539 "num_base_bdevs_discovered": 3, 00:08:13.539 "num_base_bdevs_operational": 3, 00:08:13.539 "base_bdevs_list": [ 00:08:13.539 { 00:08:13.539 "name": "pt1", 00:08:13.539 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:13.539 "is_configured": true, 00:08:13.539 "data_offset": 2048, 00:08:13.539 "data_size": 63488 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "name": "pt2", 00:08:13.539 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:13.539 "is_configured": true, 00:08:13.539 "data_offset": 2048, 00:08:13.539 "data_size": 63488 00:08:13.539 }, 00:08:13.539 { 00:08:13.539 "name": "pt3", 00:08:13.539 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:13.539 "is_configured": true, 00:08:13.539 "data_offset": 2048, 00:08:13.539 
"data_size": 63488 00:08:13.539 } 00:08:13.539 ] 00:08:13.539 } 00:08:13.539 } 00:08:13.539 }' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:13.539 pt2 00:08:13.539 pt3' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.539 21:40:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.539 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.798 [2024-11-27 21:40:36.661766] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.798 21:40:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f9b953f3-87e7-47b7-86e7-2f6da68bbb5c 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f9b953f3-87e7-47b7-86e7-2f6da68bbb5c ']' 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.798 [2024-11-27 21:40:36.689455] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.798 [2024-11-27 21:40:36.689480] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.798 [2024-11-27 21:40:36.689546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.798 [2024-11-27 21:40:36.689602] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.798 [2024-11-27 21:40:36.689616] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.798 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 [2024-11-27 21:40:36.841220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.799 [2024-11-27 21:40:36.843060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
malloc2 is claimed 00:08:13.799 [2024-11-27 21:40:36.843103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:13.799 [2024-11-27 21:40:36.843150] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:13.799 [2024-11-27 21:40:36.843191] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.799 [2024-11-27 21:40:36.843222] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:13.799 [2024-11-27 21:40:36.843234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.799 [2024-11-27 21:40:36.843244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:13.799 request: 00:08:13.799 { 00:08:13.799 "name": "raid_bdev1", 00:08:13.799 "raid_level": "concat", 00:08:13.799 "base_bdevs": [ 00:08:13.799 "malloc1", 00:08:13.799 "malloc2", 00:08:13.799 "malloc3" 00:08:13.799 ], 00:08:13.799 "strip_size_kb": 64, 00:08:13.799 "superblock": false, 00:08:13.799 "method": "bdev_raid_create", 00:08:13.799 "req_id": 1 00:08:13.799 } 00:08:13.799 Got JSON-RPC error response 00:08:13.799 response: 00:08:13.799 { 00:08:13.799 "code": -17, 00:08:13.799 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.799 } 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.799 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.799 [2024-11-27 21:40:36.917057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.799 [2024-11-27 21:40:36.917147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.799 [2024-11-27 21:40:36.917198] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:13.799 [2024-11-27 21:40:36.917250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.061 [2024-11-27 21:40:36.919581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.061 [2024-11-27 21:40:36.919652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:14.061 [2024-11-27 21:40:36.919760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:14.061 [2024-11-27 21:40:36.919865] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:14.061 pt1 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.061 "name": "raid_bdev1", 
00:08:14.061 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:14.061 "strip_size_kb": 64, 00:08:14.061 "state": "configuring", 00:08:14.061 "raid_level": "concat", 00:08:14.061 "superblock": true, 00:08:14.061 "num_base_bdevs": 3, 00:08:14.061 "num_base_bdevs_discovered": 1, 00:08:14.061 "num_base_bdevs_operational": 3, 00:08:14.061 "base_bdevs_list": [ 00:08:14.061 { 00:08:14.061 "name": "pt1", 00:08:14.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.061 "is_configured": true, 00:08:14.061 "data_offset": 2048, 00:08:14.061 "data_size": 63488 00:08:14.061 }, 00:08:14.061 { 00:08:14.061 "name": null, 00:08:14.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.061 "is_configured": false, 00:08:14.061 "data_offset": 2048, 00:08:14.061 "data_size": 63488 00:08:14.061 }, 00:08:14.061 { 00:08:14.061 "name": null, 00:08:14.061 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.061 "is_configured": false, 00:08:14.061 "data_offset": 2048, 00:08:14.061 "data_size": 63488 00:08:14.061 } 00:08:14.061 ] 00:08:14.061 }' 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.061 21:40:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.322 [2024-11-27 21:40:37.344375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.322 [2024-11-27 21:40:37.344444] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.322 [2024-11-27 21:40:37.344465] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:14.322 [2024-11-27 21:40:37.344479] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.322 [2024-11-27 21:40:37.344890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.322 [2024-11-27 21:40:37.344916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.322 [2024-11-27 21:40:37.344999] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.322 [2024-11-27 21:40:37.345024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.322 pt2 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.322 [2024-11-27 21:40:37.356354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.322 "name": "raid_bdev1", 00:08:14.322 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:14.322 "strip_size_kb": 64, 00:08:14.322 "state": "configuring", 00:08:14.322 "raid_level": "concat", 00:08:14.322 "superblock": true, 00:08:14.322 "num_base_bdevs": 3, 00:08:14.322 "num_base_bdevs_discovered": 1, 00:08:14.322 "num_base_bdevs_operational": 3, 00:08:14.322 "base_bdevs_list": [ 00:08:14.322 { 00:08:14.322 "name": "pt1", 00:08:14.322 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.322 "is_configured": true, 00:08:14.322 "data_offset": 2048, 00:08:14.322 "data_size": 63488 00:08:14.322 }, 00:08:14.322 { 00:08:14.322 "name": null, 00:08:14.322 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.322 "is_configured": false, 00:08:14.322 "data_offset": 0, 00:08:14.322 "data_size": 63488 00:08:14.322 }, 00:08:14.322 { 00:08:14.322 "name": null, 00:08:14.322 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.322 "is_configured": false, 00:08:14.322 "data_offset": 2048, 00:08:14.322 "data_size": 63488 00:08:14.322 } 00:08:14.322 ] 00:08:14.322 }' 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.322 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.892 [2024-11-27 21:40:37.783678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.892 [2024-11-27 21:40:37.783781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.892 [2024-11-27 21:40:37.783833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:14.892 [2024-11-27 21:40:37.783860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.892 [2024-11-27 21:40:37.784348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.892 [2024-11-27 21:40:37.784412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.892 [2024-11-27 21:40:37.784535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:14.892 [2024-11-27 21:40:37.784589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.892 pt2 00:08:14.892 21:40:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.892 [2024-11-27 21:40:37.795640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:14.892 [2024-11-27 21:40:37.795720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.892 [2024-11-27 21:40:37.795753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:14.892 [2024-11-27 21:40:37.795781] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.892 [2024-11-27 21:40:37.796170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.892 [2024-11-27 21:40:37.796229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:14.892 [2024-11-27 21:40:37.796328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:14.892 [2024-11-27 21:40:37.796375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:14.892 [2024-11-27 21:40:37.796502] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:14.892 [2024-11-27 21:40:37.796540] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:14.892 [2024-11-27 21:40:37.796838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002530 00:08:14.892 [2024-11-27 21:40:37.797001] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:14.892 [2024-11-27 21:40:37.797046] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:14.892 [2024-11-27 21:40:37.797219] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.892 pt3 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.892 21:40:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.892 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.893 "name": "raid_bdev1", 00:08:14.893 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:14.893 "strip_size_kb": 64, 00:08:14.893 "state": "online", 00:08:14.893 "raid_level": "concat", 00:08:14.893 "superblock": true, 00:08:14.893 "num_base_bdevs": 3, 00:08:14.893 "num_base_bdevs_discovered": 3, 00:08:14.893 "num_base_bdevs_operational": 3, 00:08:14.893 "base_bdevs_list": [ 00:08:14.893 { 00:08:14.893 "name": "pt1", 00:08:14.893 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.893 "is_configured": true, 00:08:14.893 "data_offset": 2048, 00:08:14.893 "data_size": 63488 00:08:14.893 }, 00:08:14.893 { 00:08:14.893 "name": "pt2", 00:08:14.893 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.893 "is_configured": true, 00:08:14.893 "data_offset": 2048, 00:08:14.893 "data_size": 63488 00:08:14.893 }, 00:08:14.893 { 00:08:14.893 "name": "pt3", 00:08:14.893 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:14.893 "is_configured": true, 00:08:14.893 "data_offset": 2048, 00:08:14.893 "data_size": 63488 00:08:14.893 } 00:08:14.893 ] 00:08:14.893 }' 00:08:14.893 21:40:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.893 21:40:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.153 [2024-11-27 21:40:38.199240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:15.153 "name": "raid_bdev1", 00:08:15.153 "aliases": [ 00:08:15.153 "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c" 00:08:15.153 ], 00:08:15.153 "product_name": "Raid Volume", 00:08:15.153 "block_size": 512, 00:08:15.153 "num_blocks": 190464, 00:08:15.153 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:15.153 "assigned_rate_limits": { 00:08:15.153 "rw_ios_per_sec": 0, 00:08:15.153 "rw_mbytes_per_sec": 0, 00:08:15.153 "r_mbytes_per_sec": 0, 00:08:15.153 "w_mbytes_per_sec": 0 00:08:15.153 }, 00:08:15.153 "claimed": false, 00:08:15.153 "zoned": false, 00:08:15.153 "supported_io_types": { 00:08:15.153 "read": true, 00:08:15.153 "write": true, 00:08:15.153 "unmap": true, 00:08:15.153 "flush": true, 00:08:15.153 "reset": true, 00:08:15.153 "nvme_admin": false, 00:08:15.153 "nvme_io": false, 00:08:15.153 
"nvme_io_md": false, 00:08:15.153 "write_zeroes": true, 00:08:15.153 "zcopy": false, 00:08:15.153 "get_zone_info": false, 00:08:15.153 "zone_management": false, 00:08:15.153 "zone_append": false, 00:08:15.153 "compare": false, 00:08:15.153 "compare_and_write": false, 00:08:15.153 "abort": false, 00:08:15.153 "seek_hole": false, 00:08:15.153 "seek_data": false, 00:08:15.153 "copy": false, 00:08:15.153 "nvme_iov_md": false 00:08:15.153 }, 00:08:15.153 "memory_domains": [ 00:08:15.153 { 00:08:15.153 "dma_device_id": "system", 00:08:15.153 "dma_device_type": 1 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.153 "dma_device_type": 2 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "dma_device_id": "system", 00:08:15.153 "dma_device_type": 1 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.153 "dma_device_type": 2 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "dma_device_id": "system", 00:08:15.153 "dma_device_type": 1 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.153 "dma_device_type": 2 00:08:15.153 } 00:08:15.153 ], 00:08:15.153 "driver_specific": { 00:08:15.153 "raid": { 00:08:15.153 "uuid": "f9b953f3-87e7-47b7-86e7-2f6da68bbb5c", 00:08:15.153 "strip_size_kb": 64, 00:08:15.153 "state": "online", 00:08:15.153 "raid_level": "concat", 00:08:15.153 "superblock": true, 00:08:15.153 "num_base_bdevs": 3, 00:08:15.153 "num_base_bdevs_discovered": 3, 00:08:15.153 "num_base_bdevs_operational": 3, 00:08:15.153 "base_bdevs_list": [ 00:08:15.153 { 00:08:15.153 "name": "pt1", 00:08:15.153 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.153 "is_configured": true, 00:08:15.153 "data_offset": 2048, 00:08:15.153 "data_size": 63488 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "name": "pt2", 00:08:15.153 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.153 "is_configured": true, 00:08:15.153 "data_offset": 2048, 00:08:15.153 "data_size": 
63488 00:08:15.153 }, 00:08:15.153 { 00:08:15.153 "name": "pt3", 00:08:15.153 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:15.153 "is_configured": true, 00:08:15.153 "data_offset": 2048, 00:08:15.153 "data_size": 63488 00:08:15.153 } 00:08:15.153 ] 00:08:15.153 } 00:08:15.153 } 00:08:15.153 }' 00:08:15.153 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:15.413 pt2 00:08:15.413 pt3' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:08:15.413 [2024-11-27 21:40:38.470711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f9b953f3-87e7-47b7-86e7-2f6da68bbb5c '!=' f9b953f3-87e7-47b7-86e7-2f6da68bbb5c ']' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 77688 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 77688 ']' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 77688 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.413 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77688 00:08:15.673 killing process with pid 77688 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77688' 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 77688 00:08:15.674 [2024-11-27 21:40:38.533814] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.674 [2024-11-27 
21:40:38.533916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.674 [2024-11-27 21:40:38.533981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.674 [2024-11-27 21:40:38.533991] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 77688 00:08:15.674 [2024-11-27 21:40:38.566199] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:15.674 00:08:15.674 real 0m3.890s 00:08:15.674 user 0m6.158s 00:08:15.674 sys 0m0.813s 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.674 ************************************ 00:08:15.674 END TEST raid_superblock_test 00:08:15.674 ************************************ 00:08:15.674 21:40:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.934 21:40:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:15.934 21:40:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:15.934 21:40:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.934 21:40:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.934 ************************************ 00:08:15.934 START TEST raid_read_error_test 00:08:15.934 ************************************ 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:15.934 
21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:15.934 21:40:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1w5eg4n5IZ 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=77930 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 77930 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 77930 ']' 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.934 21:40:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.934 [2024-11-27 21:40:38.950245] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:15.934 [2024-11-27 21:40:38.950366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77930 ] 00:08:16.194 [2024-11-27 21:40:39.102645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.194 [2024-11-27 21:40:39.127126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.194 [2024-11-27 21:40:39.168763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.194 [2024-11-27 21:40:39.168826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 BaseBdev1_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 true 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 [2024-11-27 21:40:39.815531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:16.763 [2024-11-27 21:40:39.815581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.763 [2024-11-27 21:40:39.815625] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:16.763 [2024-11-27 21:40:39.815633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.763 [2024-11-27 21:40:39.817804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.763 [2024-11-27 21:40:39.817879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:16.763 BaseBdev1 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.763 BaseBdev2_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:16.763 21:40:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.764 true 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.764 [2024-11-27 21:40:39.856008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:16.764 [2024-11-27 21:40:39.856054] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:16.764 [2024-11-27 21:40:39.856072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:16.764 [2024-11-27 21:40:39.856087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:16.764 [2024-11-27 21:40:39.858166] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:16.764 [2024-11-27 21:40:39.858203] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:16.764 BaseBdev2 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.764 BaseBdev3_malloc 00:08:16.764 21:40:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.764 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 true 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 [2024-11-27 21:40:39.896512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:17.024 [2024-11-27 21:40:39.896560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:17.024 [2024-11-27 21:40:39.896578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:17.024 [2024-11-27 21:40:39.896587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:17.024 [2024-11-27 21:40:39.898701] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:17.024 [2024-11-27 21:40:39.898735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:17.024 BaseBdev3 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 [2024-11-27 21:40:39.908547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.024 [2024-11-27 21:40:39.910448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.024 [2024-11-27 21:40:39.910518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:17.024 [2024-11-27 21:40:39.910688] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:17.024 [2024-11-27 21:40:39.910702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.024 [2024-11-27 21:40:39.910962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:17.024 [2024-11-27 21:40:39.911099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:17.024 [2024-11-27 21:40:39.911109] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:17.024 [2024-11-27 21:40:39.911224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.024 21:40:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.024 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.024 "name": "raid_bdev1", 00:08:17.024 "uuid": "3120b83b-c77c-4c08-891b-49aec8561fb9", 00:08:17.024 "strip_size_kb": 64, 00:08:17.025 "state": "online", 00:08:17.025 "raid_level": "concat", 00:08:17.025 "superblock": true, 00:08:17.025 "num_base_bdevs": 3, 00:08:17.025 "num_base_bdevs_discovered": 3, 00:08:17.025 "num_base_bdevs_operational": 3, 00:08:17.025 "base_bdevs_list": [ 00:08:17.025 { 00:08:17.025 "name": "BaseBdev1", 00:08:17.025 "uuid": "bc3c2145-1d75-5257-8dc9-f873f41806f1", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 2048, 00:08:17.025 "data_size": 63488 00:08:17.025 }, 00:08:17.025 { 00:08:17.025 "name": "BaseBdev2", 00:08:17.025 "uuid": "491554da-f03b-528d-9952-ff8499dde236", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 2048, 00:08:17.025 "data_size": 63488 
00:08:17.025 }, 00:08:17.025 { 00:08:17.025 "name": "BaseBdev3", 00:08:17.025 "uuid": "bde10dbb-f762-522e-a613-ddfde113f63b", 00:08:17.025 "is_configured": true, 00:08:17.025 "data_offset": 2048, 00:08:17.025 "data_size": 63488 00:08:17.025 } 00:08:17.025 ] 00:08:17.025 }' 00:08:17.025 21:40:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.025 21:40:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.285 21:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:17.285 21:40:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:17.545 [2024-11-27 21:40:40.488066] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.485 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.486 "name": "raid_bdev1", 00:08:18.486 "uuid": "3120b83b-c77c-4c08-891b-49aec8561fb9", 00:08:18.486 "strip_size_kb": 64, 00:08:18.486 "state": "online", 00:08:18.486 "raid_level": "concat", 00:08:18.486 "superblock": true, 00:08:18.486 "num_base_bdevs": 3, 00:08:18.486 "num_base_bdevs_discovered": 3, 00:08:18.486 "num_base_bdevs_operational": 3, 00:08:18.486 "base_bdevs_list": [ 00:08:18.486 { 00:08:18.486 "name": "BaseBdev1", 00:08:18.486 "uuid": "bc3c2145-1d75-5257-8dc9-f873f41806f1", 00:08:18.486 "is_configured": true, 00:08:18.486 "data_offset": 2048, 00:08:18.486 "data_size": 63488 
00:08:18.486 }, 00:08:18.486 { 00:08:18.486 "name": "BaseBdev2", 00:08:18.486 "uuid": "491554da-f03b-528d-9952-ff8499dde236", 00:08:18.486 "is_configured": true, 00:08:18.486 "data_offset": 2048, 00:08:18.486 "data_size": 63488 00:08:18.486 }, 00:08:18.486 { 00:08:18.486 "name": "BaseBdev3", 00:08:18.486 "uuid": "bde10dbb-f762-522e-a613-ddfde113f63b", 00:08:18.486 "is_configured": true, 00:08:18.486 "data_offset": 2048, 00:08:18.486 "data_size": 63488 00:08:18.486 } 00:08:18.486 ] 00:08:18.486 }' 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.486 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.746 [2024-11-27 21:40:41.831521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:18.746 [2024-11-27 21:40:41.831611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.746 [2024-11-27 21:40:41.834254] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.746 [2024-11-27 21:40:41.834361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.746 [2024-11-27 21:40:41.834428] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.746 [2024-11-27 21:40:41.834488] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.746 { 00:08:18.746 "results": [ 00:08:18.746 { 00:08:18.746 "job": "raid_bdev1", 
00:08:18.746 "core_mask": "0x1", 00:08:18.746 "workload": "randrw", 00:08:18.746 "percentage": 50, 00:08:18.746 "status": "finished", 00:08:18.746 "queue_depth": 1, 00:08:18.746 "io_size": 131072, 00:08:18.746 "runtime": 1.344389, 00:08:18.746 "iops": 16616.470381712435, 00:08:18.746 "mibps": 2077.0587977140544, 00:08:18.746 "io_failed": 1, 00:08:18.746 "io_timeout": 0, 00:08:18.746 "avg_latency_us": 83.0737203910975, 00:08:18.746 "min_latency_us": 24.929257641921396, 00:08:18.746 "max_latency_us": 1359.3711790393013 00:08:18.746 } 00:08:18.746 ], 00:08:18.746 "core_count": 1 00:08:18.746 } 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 77930 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 77930 ']' 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 77930 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77930 00:08:18.746 killing process with pid 77930 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77930' 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 77930 00:08:18.746 [2024-11-27 21:40:41.866261] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.746 21:40:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 77930 00:08:19.006 [2024-11-27 
21:40:41.891786] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1w5eg4n5IZ 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:19.006 ************************************ 00:08:19.006 END TEST raid_read_error_test 00:08:19.006 ************************************ 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:19.006 00:08:19.006 real 0m3.248s 00:08:19.006 user 0m4.167s 00:08:19.006 sys 0m0.488s 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.006 21:40:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.266 21:40:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:08:19.266 21:40:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:19.266 21:40:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.266 21:40:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.266 ************************************ 00:08:19.266 START TEST raid_write_error_test 00:08:19.266 ************************************ 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:08:19.266 21:40:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:19.266 21:40:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.E0YrcYmP52 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=78059 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 78059 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 78059 ']' 00:08:19.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.266 21:40:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.266 [2024-11-27 21:40:42.274322] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:19.267 [2024-11-27 21:40:42.274548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78059 ] 00:08:19.526 [2024-11-27 21:40:42.406126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.526 [2024-11-27 21:40:42.430174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.526 [2024-11-27 21:40:42.472079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.526 [2024-11-27 21:40:42.472118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.095 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 BaseBdev1_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 true 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 [2024-11-27 21:40:43.127029] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:20.096 [2024-11-27 21:40:43.127079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.096 [2024-11-27 21:40:43.127124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:20.096 [2024-11-27 21:40:43.127134] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.096 [2024-11-27 21:40:43.129300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.096 [2024-11-27 21:40:43.129345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:20.096 BaseBdev1 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.096 BaseBdev2_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 true 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 [2024-11-27 21:40:43.167467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:20.096 [2024-11-27 21:40:43.167548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.096 [2024-11-27 21:40:43.167585] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:20.096 [2024-11-27 21:40:43.167603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.096 [2024-11-27 21:40:43.169718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.096 [2024-11-27 21:40:43.169753] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:20.096 BaseBdev2 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:20.096 21:40:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 BaseBdev3_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 true 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.096 [2024-11-27 21:40:43.207860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:20.096 [2024-11-27 21:40:43.207901] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.096 [2024-11-27 21:40:43.207934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:20.096 [2024-11-27 21:40:43.207942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.096 [2024-11-27 21:40:43.209920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.096 [2024-11-27 21:40:43.210001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:20.096 BaseBdev3 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.096 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.356 [2024-11-27 21:40:43.219912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:20.356 [2024-11-27 21:40:43.221750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.356 [2024-11-27 21:40:43.221837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.356 [2024-11-27 21:40:43.222002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:20.356 [2024-11-27 21:40:43.222016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.356 [2024-11-27 21:40:43.222264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:20.356 [2024-11-27 21:40:43.222406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:20.356 [2024-11-27 21:40:43.222420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:20.356 [2024-11-27 21:40:43.222550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.356 "name": "raid_bdev1", 00:08:20.356 "uuid": "fb958183-7d6b-4394-89ac-a21c13f5df53", 00:08:20.356 "strip_size_kb": 64, 00:08:20.356 "state": "online", 00:08:20.356 "raid_level": "concat", 00:08:20.356 "superblock": true, 00:08:20.356 "num_base_bdevs": 3, 00:08:20.356 "num_base_bdevs_discovered": 3, 00:08:20.356 "num_base_bdevs_operational": 3, 00:08:20.356 "base_bdevs_list": [ 00:08:20.356 { 00:08:20.356 
"name": "BaseBdev1", 00:08:20.356 "uuid": "6ca5af7e-3c2e-55c5-a41d-a99ebe872e8f", 00:08:20.356 "is_configured": true, 00:08:20.356 "data_offset": 2048, 00:08:20.356 "data_size": 63488 00:08:20.356 }, 00:08:20.356 { 00:08:20.356 "name": "BaseBdev2", 00:08:20.356 "uuid": "5b528f3c-5e80-5853-9dfb-ceff8b321ac2", 00:08:20.356 "is_configured": true, 00:08:20.356 "data_offset": 2048, 00:08:20.356 "data_size": 63488 00:08:20.356 }, 00:08:20.356 { 00:08:20.356 "name": "BaseBdev3", 00:08:20.356 "uuid": "1fcb09a6-994d-5d02-97b3-c5f1908e5690", 00:08:20.356 "is_configured": true, 00:08:20.356 "data_offset": 2048, 00:08:20.356 "data_size": 63488 00:08:20.356 } 00:08:20.356 ] 00:08:20.356 }' 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.356 21:40:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.616 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:20.616 21:40:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:20.876 [2024-11-27 21:40:43.763361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.827 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.828 "name": "raid_bdev1", 00:08:21.828 "uuid": "fb958183-7d6b-4394-89ac-a21c13f5df53", 00:08:21.828 "strip_size_kb": 64, 00:08:21.828 "state": "online", 
00:08:21.828 "raid_level": "concat", 00:08:21.828 "superblock": true, 00:08:21.828 "num_base_bdevs": 3, 00:08:21.828 "num_base_bdevs_discovered": 3, 00:08:21.828 "num_base_bdevs_operational": 3, 00:08:21.828 "base_bdevs_list": [ 00:08:21.828 { 00:08:21.828 "name": "BaseBdev1", 00:08:21.828 "uuid": "6ca5af7e-3c2e-55c5-a41d-a99ebe872e8f", 00:08:21.828 "is_configured": true, 00:08:21.828 "data_offset": 2048, 00:08:21.828 "data_size": 63488 00:08:21.828 }, 00:08:21.828 { 00:08:21.828 "name": "BaseBdev2", 00:08:21.828 "uuid": "5b528f3c-5e80-5853-9dfb-ceff8b321ac2", 00:08:21.828 "is_configured": true, 00:08:21.828 "data_offset": 2048, 00:08:21.828 "data_size": 63488 00:08:21.828 }, 00:08:21.828 { 00:08:21.828 "name": "BaseBdev3", 00:08:21.828 "uuid": "1fcb09a6-994d-5d02-97b3-c5f1908e5690", 00:08:21.828 "is_configured": true, 00:08:21.828 "data_offset": 2048, 00:08:21.828 "data_size": 63488 00:08:21.828 } 00:08:21.828 ] 00:08:21.828 }' 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.828 21:40:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.102 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.102 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.102 [2024-11-27 21:40:45.123243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.102 [2024-11-27 21:40:45.123275] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.102 [2024-11-27 21:40:45.125943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.102 [2024-11-27 21:40:45.125993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.102 [2024-11-27 21:40:45.126027] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.102 [2024-11-27 21:40:45.126044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:22.102 { 00:08:22.102 "results": [ 00:08:22.102 { 00:08:22.102 "job": "raid_bdev1", 00:08:22.102 "core_mask": "0x1", 00:08:22.102 "workload": "randrw", 00:08:22.102 "percentage": 50, 00:08:22.102 "status": "finished", 00:08:22.102 "queue_depth": 1, 00:08:22.102 "io_size": 131072, 00:08:22.102 "runtime": 1.360636, 00:08:22.102 "iops": 16458.479710958698, 00:08:22.102 "mibps": 2057.3099638698372, 00:08:22.102 "io_failed": 1, 00:08:22.102 "io_timeout": 0, 00:08:22.102 "avg_latency_us": 83.91922105195424, 00:08:22.102 "min_latency_us": 25.041048034934498, 00:08:22.103 "max_latency_us": 1452.380786026201 00:08:22.103 } 00:08:22.103 ], 00:08:22.103 "core_count": 1 00:08:22.103 } 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 78059 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 78059 ']' 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 78059 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78059 00:08:22.103 killing process with pid 78059 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.103 21:40:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78059' 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 78059 00:08:22.103 [2024-11-27 21:40:45.162638] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.103 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 78059 00:08:22.103 [2024-11-27 21:40:45.187900] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.E0YrcYmP52 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:22.363 ************************************ 00:08:22.363 END TEST raid_write_error_test 00:08:22.363 ************************************ 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:22.363 00:08:22.363 real 0m3.225s 00:08:22.363 user 0m4.127s 00:08:22.363 sys 0m0.489s 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.363 21:40:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.363 21:40:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:22.363 21:40:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:08:22.363 21:40:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:22.363 21:40:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.363 21:40:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.363 ************************************ 00:08:22.363 START TEST raid_state_function_test 00:08:22.363 ************************************ 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:22.363 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=78186 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78186' 00:08:22.624 Process raid pid: 78186 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 78186 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 78186 ']' 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.624 21:40:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.624 [2024-11-27 21:40:45.562633] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:22.624 [2024-11-27 21:40:45.562855] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.624 [2024-11-27 21:40:45.715888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.624 [2024-11-27 21:40:45.740367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.883 [2024-11-27 21:40:45.781857] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.883 [2024-11-27 21:40:45.781889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.453 [2024-11-27 21:40:46.388312] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.453 [2024-11-27 21:40:46.388441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.453 [2024-11-27 21:40:46.388457] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.453 [2024-11-27 21:40:46.388466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.453 [2024-11-27 21:40:46.388472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.453 [2024-11-27 21:40:46.388484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.453 
21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.453 "name": "Existed_Raid", 00:08:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.453 "strip_size_kb": 0, 00:08:23.453 "state": "configuring", 00:08:23.453 "raid_level": "raid1", 00:08:23.453 "superblock": false, 00:08:23.453 "num_base_bdevs": 3, 00:08:23.453 "num_base_bdevs_discovered": 0, 00:08:23.453 "num_base_bdevs_operational": 3, 00:08:23.453 "base_bdevs_list": [ 00:08:23.453 { 00:08:23.453 "name": "BaseBdev1", 00:08:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.453 "is_configured": false, 00:08:23.453 "data_offset": 0, 00:08:23.453 "data_size": 0 00:08:23.453 }, 00:08:23.453 { 00:08:23.453 "name": "BaseBdev2", 00:08:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.453 "is_configured": false, 00:08:23.453 "data_offset": 0, 00:08:23.453 "data_size": 0 00:08:23.453 }, 00:08:23.453 { 00:08:23.453 "name": "BaseBdev3", 00:08:23.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.453 "is_configured": false, 00:08:23.453 "data_offset": 0, 00:08:23.453 "data_size": 0 00:08:23.453 } 00:08:23.453 ] 00:08:23.453 }' 00:08:23.453 21:40:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.453 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 [2024-11-27 21:40:46.791572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.713 [2024-11-27 21:40:46.791649] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 [2024-11-27 21:40:46.799585] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.713 [2024-11-27 21:40:46.799658] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.713 [2024-11-27 21:40:46.799685] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.713 [2024-11-27 21:40:46.799707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.713 [2024-11-27 21:40:46.799725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:23.713 [2024-11-27 21:40:46.799745] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.713 [2024-11-27 21:40:46.820148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.713 BaseBdev1 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.713 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.973 [ 00:08:23.973 { 00:08:23.973 "name": "BaseBdev1", 00:08:23.973 "aliases": [ 00:08:23.973 "e31c66dd-ba2a-47e4-9e90-b77ea33ed279" 00:08:23.973 ], 00:08:23.973 "product_name": "Malloc disk", 00:08:23.973 "block_size": 512, 00:08:23.973 "num_blocks": 65536, 00:08:23.973 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:23.973 "assigned_rate_limits": { 00:08:23.973 "rw_ios_per_sec": 0, 00:08:23.973 "rw_mbytes_per_sec": 0, 00:08:23.973 "r_mbytes_per_sec": 0, 00:08:23.973 "w_mbytes_per_sec": 0 00:08:23.973 }, 00:08:23.973 "claimed": true, 00:08:23.973 "claim_type": "exclusive_write", 00:08:23.973 "zoned": false, 00:08:23.973 "supported_io_types": { 00:08:23.973 "read": true, 00:08:23.973 "write": true, 00:08:23.973 "unmap": true, 00:08:23.973 "flush": true, 00:08:23.973 "reset": true, 00:08:23.973 "nvme_admin": false, 00:08:23.973 "nvme_io": false, 00:08:23.973 "nvme_io_md": false, 00:08:23.973 "write_zeroes": true, 00:08:23.973 "zcopy": true, 00:08:23.973 "get_zone_info": false, 00:08:23.973 "zone_management": false, 00:08:23.973 "zone_append": false, 00:08:23.973 "compare": false, 00:08:23.973 "compare_and_write": false, 00:08:23.973 "abort": true, 00:08:23.973 "seek_hole": false, 00:08:23.973 "seek_data": false, 00:08:23.973 "copy": true, 00:08:23.973 "nvme_iov_md": false 00:08:23.973 }, 00:08:23.973 "memory_domains": [ 00:08:23.973 { 00:08:23.973 "dma_device_id": "system", 00:08:23.973 "dma_device_type": 1 00:08:23.973 }, 00:08:23.973 { 00:08:23.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.973 "dma_device_type": 2 00:08:23.973 } 00:08:23.973 ], 00:08:23.973 "driver_specific": {} 00:08:23.973 } 00:08:23.973 ] 00:08:23.973 21:40:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:23.973 "name": "Existed_Raid", 00:08:23.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.973 "strip_size_kb": 0, 00:08:23.973 "state": "configuring", 00:08:23.973 "raid_level": "raid1", 00:08:23.973 "superblock": false, 00:08:23.973 "num_base_bdevs": 3, 00:08:23.973 "num_base_bdevs_discovered": 1, 00:08:23.973 "num_base_bdevs_operational": 3, 00:08:23.973 "base_bdevs_list": [ 00:08:23.973 { 00:08:23.973 "name": "BaseBdev1", 00:08:23.973 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:23.973 "is_configured": true, 00:08:23.973 "data_offset": 0, 00:08:23.973 "data_size": 65536 00:08:23.973 }, 00:08:23.973 { 00:08:23.973 "name": "BaseBdev2", 00:08:23.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.973 "is_configured": false, 00:08:23.973 "data_offset": 0, 00:08:23.973 "data_size": 0 00:08:23.973 }, 00:08:23.973 { 00:08:23.973 "name": "BaseBdev3", 00:08:23.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.973 "is_configured": false, 00:08:23.973 "data_offset": 0, 00:08:23.973 "data_size": 0 00:08:23.973 } 00:08:23.973 ] 00:08:23.973 }' 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.973 21:40:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.233 [2024-11-27 21:40:47.283369] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:24.233 [2024-11-27 21:40:47.283461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.233 [2024-11-27 21:40:47.295362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.233 [2024-11-27 21:40:47.297263] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.233 [2024-11-27 21:40:47.297357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.233 [2024-11-27 21:40:47.297380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:24.233 [2024-11-27 21:40:47.297392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.233 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.234 "name": "Existed_Raid", 00:08:24.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.234 "strip_size_kb": 0, 00:08:24.234 "state": "configuring", 00:08:24.234 "raid_level": "raid1", 00:08:24.234 "superblock": false, 00:08:24.234 "num_base_bdevs": 3, 00:08:24.234 "num_base_bdevs_discovered": 1, 00:08:24.234 "num_base_bdevs_operational": 3, 00:08:24.234 "base_bdevs_list": [ 00:08:24.234 { 00:08:24.234 "name": "BaseBdev1", 00:08:24.234 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:24.234 "is_configured": true, 00:08:24.234 "data_offset": 0, 00:08:24.234 "data_size": 65536 00:08:24.234 }, 00:08:24.234 { 00:08:24.234 "name": "BaseBdev2", 00:08:24.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.234 
"is_configured": false, 00:08:24.234 "data_offset": 0, 00:08:24.234 "data_size": 0 00:08:24.234 }, 00:08:24.234 { 00:08:24.234 "name": "BaseBdev3", 00:08:24.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.234 "is_configured": false, 00:08:24.234 "data_offset": 0, 00:08:24.234 "data_size": 0 00:08:24.234 } 00:08:24.234 ] 00:08:24.234 }' 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.234 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 [2024-11-27 21:40:47.713428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:24.804 BaseBdev2 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:24.804 21:40:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.804 [ 00:08:24.804 { 00:08:24.804 "name": "BaseBdev2", 00:08:24.804 "aliases": [ 00:08:24.804 "63793cde-1fef-4ab5-a3ea-c187b73459ad" 00:08:24.804 ], 00:08:24.804 "product_name": "Malloc disk", 00:08:24.804 "block_size": 512, 00:08:24.804 "num_blocks": 65536, 00:08:24.804 "uuid": "63793cde-1fef-4ab5-a3ea-c187b73459ad", 00:08:24.804 "assigned_rate_limits": { 00:08:24.804 "rw_ios_per_sec": 0, 00:08:24.804 "rw_mbytes_per_sec": 0, 00:08:24.804 "r_mbytes_per_sec": 0, 00:08:24.804 "w_mbytes_per_sec": 0 00:08:24.804 }, 00:08:24.804 "claimed": true, 00:08:24.804 "claim_type": "exclusive_write", 00:08:24.804 "zoned": false, 00:08:24.804 "supported_io_types": { 00:08:24.804 "read": true, 00:08:24.804 "write": true, 00:08:24.804 "unmap": true, 00:08:24.804 "flush": true, 00:08:24.804 "reset": true, 00:08:24.804 "nvme_admin": false, 00:08:24.804 "nvme_io": false, 00:08:24.804 "nvme_io_md": false, 00:08:24.804 "write_zeroes": true, 00:08:24.804 "zcopy": true, 00:08:24.804 "get_zone_info": false, 00:08:24.804 "zone_management": false, 00:08:24.804 "zone_append": false, 00:08:24.804 "compare": false, 00:08:24.804 "compare_and_write": false, 00:08:24.804 "abort": true, 00:08:24.804 "seek_hole": false, 00:08:24.804 "seek_data": false, 00:08:24.804 "copy": true, 00:08:24.804 "nvme_iov_md": false 00:08:24.804 }, 00:08:24.804 
"memory_domains": [ 00:08:24.804 { 00:08:24.804 "dma_device_id": "system", 00:08:24.804 "dma_device_type": 1 00:08:24.804 }, 00:08:24.804 { 00:08:24.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.804 "dma_device_type": 2 00:08:24.804 } 00:08:24.804 ], 00:08:24.804 "driver_specific": {} 00:08:24.804 } 00:08:24.804 ] 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:24.804 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.805 "name": "Existed_Raid", 00:08:24.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.805 "strip_size_kb": 0, 00:08:24.805 "state": "configuring", 00:08:24.805 "raid_level": "raid1", 00:08:24.805 "superblock": false, 00:08:24.805 "num_base_bdevs": 3, 00:08:24.805 "num_base_bdevs_discovered": 2, 00:08:24.805 "num_base_bdevs_operational": 3, 00:08:24.805 "base_bdevs_list": [ 00:08:24.805 { 00:08:24.805 "name": "BaseBdev1", 00:08:24.805 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:24.805 "is_configured": true, 00:08:24.805 "data_offset": 0, 00:08:24.805 "data_size": 65536 00:08:24.805 }, 00:08:24.805 { 00:08:24.805 "name": "BaseBdev2", 00:08:24.805 "uuid": "63793cde-1fef-4ab5-a3ea-c187b73459ad", 00:08:24.805 "is_configured": true, 00:08:24.805 "data_offset": 0, 00:08:24.805 "data_size": 65536 00:08:24.805 }, 00:08:24.805 { 00:08:24.805 "name": "BaseBdev3", 00:08:24.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.805 "is_configured": false, 00:08:24.805 "data_offset": 0, 00:08:24.805 "data_size": 0 00:08:24.805 } 00:08:24.805 ] 00:08:24.805 }' 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.805 21:40:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.065 [2024-11-27 21:40:48.167968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:25.065 [2024-11-27 21:40:48.168019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:25.065 [2024-11-27 21:40:48.168032] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:25.065 [2024-11-27 21:40:48.168399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:25.065 [2024-11-27 21:40:48.168627] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:25.065 [2024-11-27 21:40:48.168649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:25.065 [2024-11-27 21:40:48.168934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.065 BaseBdev3 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.065 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.324 [ 00:08:25.324 { 00:08:25.324 "name": "BaseBdev3", 00:08:25.324 "aliases": [ 00:08:25.324 "0372b328-1dbf-4541-b454-1390bba2c8fb" 00:08:25.324 ], 00:08:25.324 "product_name": "Malloc disk", 00:08:25.324 "block_size": 512, 00:08:25.324 "num_blocks": 65536, 00:08:25.325 "uuid": "0372b328-1dbf-4541-b454-1390bba2c8fb", 00:08:25.325 "assigned_rate_limits": { 00:08:25.325 "rw_ios_per_sec": 0, 00:08:25.325 "rw_mbytes_per_sec": 0, 00:08:25.325 "r_mbytes_per_sec": 0, 00:08:25.325 "w_mbytes_per_sec": 0 00:08:25.325 }, 00:08:25.325 "claimed": true, 00:08:25.325 "claim_type": "exclusive_write", 00:08:25.325 "zoned": false, 00:08:25.325 "supported_io_types": { 00:08:25.325 "read": true, 00:08:25.325 "write": true, 00:08:25.325 "unmap": true, 00:08:25.325 "flush": true, 00:08:25.325 "reset": true, 00:08:25.325 "nvme_admin": false, 00:08:25.325 "nvme_io": false, 00:08:25.325 "nvme_io_md": false, 00:08:25.325 "write_zeroes": true, 00:08:25.325 "zcopy": true, 00:08:25.325 "get_zone_info": false, 00:08:25.325 "zone_management": false, 00:08:25.325 "zone_append": false, 00:08:25.325 "compare": false, 00:08:25.325 "compare_and_write": false, 00:08:25.325 "abort": true, 00:08:25.325 "seek_hole": false, 00:08:25.325 "seek_data": false, 00:08:25.325 
"copy": true, 00:08:25.325 "nvme_iov_md": false 00:08:25.325 }, 00:08:25.325 "memory_domains": [ 00:08:25.325 { 00:08:25.325 "dma_device_id": "system", 00:08:25.325 "dma_device_type": 1 00:08:25.325 }, 00:08:25.325 { 00:08:25.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.325 "dma_device_type": 2 00:08:25.325 } 00:08:25.325 ], 00:08:25.325 "driver_specific": {} 00:08:25.325 } 00:08:25.325 ] 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.325 21:40:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.325 "name": "Existed_Raid", 00:08:25.325 "uuid": "4da3e37c-44f2-4fe6-8e39-c7608c9a2270", 00:08:25.325 "strip_size_kb": 0, 00:08:25.325 "state": "online", 00:08:25.325 "raid_level": "raid1", 00:08:25.325 "superblock": false, 00:08:25.325 "num_base_bdevs": 3, 00:08:25.325 "num_base_bdevs_discovered": 3, 00:08:25.325 "num_base_bdevs_operational": 3, 00:08:25.325 "base_bdevs_list": [ 00:08:25.325 { 00:08:25.325 "name": "BaseBdev1", 00:08:25.325 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:25.325 "is_configured": true, 00:08:25.325 "data_offset": 0, 00:08:25.325 "data_size": 65536 00:08:25.325 }, 00:08:25.325 { 00:08:25.325 "name": "BaseBdev2", 00:08:25.325 "uuid": "63793cde-1fef-4ab5-a3ea-c187b73459ad", 00:08:25.325 "is_configured": true, 00:08:25.325 "data_offset": 0, 00:08:25.325 "data_size": 65536 00:08:25.325 }, 00:08:25.325 { 00:08:25.325 "name": "BaseBdev3", 00:08:25.325 "uuid": "0372b328-1dbf-4541-b454-1390bba2c8fb", 00:08:25.325 "is_configured": true, 00:08:25.325 "data_offset": 0, 00:08:25.325 "data_size": 65536 00:08:25.325 } 00:08:25.325 ] 00:08:25.325 }' 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.325 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.585 21:40:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.585 [2024-11-27 21:40:48.599514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:25.585 "name": "Existed_Raid", 00:08:25.585 "aliases": [ 00:08:25.585 "4da3e37c-44f2-4fe6-8e39-c7608c9a2270" 00:08:25.585 ], 00:08:25.585 "product_name": "Raid Volume", 00:08:25.585 "block_size": 512, 00:08:25.585 "num_blocks": 65536, 00:08:25.585 "uuid": "4da3e37c-44f2-4fe6-8e39-c7608c9a2270", 00:08:25.585 "assigned_rate_limits": { 00:08:25.585 "rw_ios_per_sec": 0, 00:08:25.585 "rw_mbytes_per_sec": 0, 00:08:25.585 "r_mbytes_per_sec": 0, 00:08:25.585 "w_mbytes_per_sec": 0 00:08:25.585 }, 00:08:25.585 "claimed": false, 00:08:25.585 "zoned": false, 
00:08:25.585 "supported_io_types": { 00:08:25.585 "read": true, 00:08:25.585 "write": true, 00:08:25.585 "unmap": false, 00:08:25.585 "flush": false, 00:08:25.585 "reset": true, 00:08:25.585 "nvme_admin": false, 00:08:25.585 "nvme_io": false, 00:08:25.585 "nvme_io_md": false, 00:08:25.585 "write_zeroes": true, 00:08:25.585 "zcopy": false, 00:08:25.585 "get_zone_info": false, 00:08:25.585 "zone_management": false, 00:08:25.585 "zone_append": false, 00:08:25.585 "compare": false, 00:08:25.585 "compare_and_write": false, 00:08:25.585 "abort": false, 00:08:25.585 "seek_hole": false, 00:08:25.585 "seek_data": false, 00:08:25.585 "copy": false, 00:08:25.585 "nvme_iov_md": false 00:08:25.585 }, 00:08:25.585 "memory_domains": [ 00:08:25.585 { 00:08:25.585 "dma_device_id": "system", 00:08:25.585 "dma_device_type": 1 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.585 "dma_device_type": 2 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "dma_device_id": "system", 00:08:25.585 "dma_device_type": 1 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.585 "dma_device_type": 2 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "dma_device_id": "system", 00:08:25.585 "dma_device_type": 1 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:25.585 "dma_device_type": 2 00:08:25.585 } 00:08:25.585 ], 00:08:25.585 "driver_specific": { 00:08:25.585 "raid": { 00:08:25.585 "uuid": "4da3e37c-44f2-4fe6-8e39-c7608c9a2270", 00:08:25.585 "strip_size_kb": 0, 00:08:25.585 "state": "online", 00:08:25.585 "raid_level": "raid1", 00:08:25.585 "superblock": false, 00:08:25.585 "num_base_bdevs": 3, 00:08:25.585 "num_base_bdevs_discovered": 3, 00:08:25.585 "num_base_bdevs_operational": 3, 00:08:25.585 "base_bdevs_list": [ 00:08:25.585 { 00:08:25.585 "name": "BaseBdev1", 00:08:25.585 "uuid": "e31c66dd-ba2a-47e4-9e90-b77ea33ed279", 00:08:25.585 "is_configured": true, 00:08:25.585 
"data_offset": 0, 00:08:25.585 "data_size": 65536 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "name": "BaseBdev2", 00:08:25.585 "uuid": "63793cde-1fef-4ab5-a3ea-c187b73459ad", 00:08:25.585 "is_configured": true, 00:08:25.585 "data_offset": 0, 00:08:25.585 "data_size": 65536 00:08:25.585 }, 00:08:25.585 { 00:08:25.585 "name": "BaseBdev3", 00:08:25.585 "uuid": "0372b328-1dbf-4541-b454-1390bba2c8fb", 00:08:25.585 "is_configured": true, 00:08:25.585 "data_offset": 0, 00:08:25.585 "data_size": 65536 00:08:25.585 } 00:08:25.585 ] 00:08:25.585 } 00:08:25.585 } 00:08:25.585 }' 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:25.585 BaseBdev2 00:08:25.585 BaseBdev3' 00:08:25.585 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 [2024-11-27 21:40:48.846861] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.846 "name": "Existed_Raid", 00:08:25.846 "uuid": "4da3e37c-44f2-4fe6-8e39-c7608c9a2270", 00:08:25.846 "strip_size_kb": 0, 00:08:25.846 "state": "online", 00:08:25.846 "raid_level": "raid1", 00:08:25.846 "superblock": false, 00:08:25.846 "num_base_bdevs": 3, 00:08:25.846 "num_base_bdevs_discovered": 2, 00:08:25.846 "num_base_bdevs_operational": 2, 00:08:25.846 "base_bdevs_list": [ 00:08:25.846 { 00:08:25.846 "name": null, 00:08:25.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.846 "is_configured": false, 00:08:25.846 "data_offset": 0, 00:08:25.846 "data_size": 65536 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "name": "BaseBdev2", 00:08:25.846 "uuid": "63793cde-1fef-4ab5-a3ea-c187b73459ad", 00:08:25.846 "is_configured": true, 00:08:25.846 "data_offset": 0, 00:08:25.846 "data_size": 65536 00:08:25.846 }, 00:08:25.846 { 00:08:25.846 "name": "BaseBdev3", 00:08:25.846 "uuid": "0372b328-1dbf-4541-b454-1390bba2c8fb", 00:08:25.846 "is_configured": true, 00:08:25.846 "data_offset": 0, 00:08:25.846 "data_size": 65536 00:08:25.846 } 00:08:25.846 ] 
00:08:25.846 }' 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.846 21:40:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.416 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:26.416 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.416 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.416 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.416 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 [2024-11-27 21:40:49.337123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.417 21:40:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 [2024-11-27 21:40:49.396035] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:26.417 [2024-11-27 21:40:49.396130] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:26.417 [2024-11-27 21:40:49.407487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:26.417 [2024-11-27 21:40:49.407533] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:26.417 [2024-11-27 21:40:49.407547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:26.417 21:40:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.417 
21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 [ 00:08:26.417 { 00:08:26.417 "name": "BaseBdev2", 00:08:26.417 "aliases": [ 00:08:26.417 "ff105032-ba51-4b94-b689-e77fc4da599e" 00:08:26.417 ], 00:08:26.417 "product_name": "Malloc disk", 00:08:26.417 "block_size": 512, 00:08:26.417 "num_blocks": 65536, 00:08:26.417 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:26.417 "assigned_rate_limits": { 00:08:26.417 "rw_ios_per_sec": 0, 00:08:26.417 "rw_mbytes_per_sec": 0, 00:08:26.417 "r_mbytes_per_sec": 0, 00:08:26.417 "w_mbytes_per_sec": 0 00:08:26.417 }, 00:08:26.417 "claimed": false, 00:08:26.417 "zoned": false, 00:08:26.417 "supported_io_types": { 00:08:26.417 "read": true, 00:08:26.417 "write": true, 00:08:26.417 "unmap": true, 00:08:26.417 "flush": true, 00:08:26.417 "reset": true, 00:08:26.417 "nvme_admin": false, 00:08:26.417 "nvme_io": false, 00:08:26.417 "nvme_io_md": false, 00:08:26.417 "write_zeroes": true, 
00:08:26.417 "zcopy": true, 00:08:26.417 "get_zone_info": false, 00:08:26.417 "zone_management": false, 00:08:26.417 "zone_append": false, 00:08:26.417 "compare": false, 00:08:26.417 "compare_and_write": false, 00:08:26.417 "abort": true, 00:08:26.417 "seek_hole": false, 00:08:26.417 "seek_data": false, 00:08:26.417 "copy": true, 00:08:26.417 "nvme_iov_md": false 00:08:26.417 }, 00:08:26.417 "memory_domains": [ 00:08:26.417 { 00:08:26.417 "dma_device_id": "system", 00:08:26.417 "dma_device_type": 1 00:08:26.417 }, 00:08:26.417 { 00:08:26.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.417 "dma_device_type": 2 00:08:26.417 } 00:08:26.417 ], 00:08:26.417 "driver_specific": {} 00:08:26.417 } 00:08:26.417 ] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.417 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.418 BaseBdev3 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.418 21:40:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.418 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.677 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.677 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:26.677 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.677 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.677 [ 00:08:26.677 { 00:08:26.677 "name": "BaseBdev3", 00:08:26.677 "aliases": [ 00:08:26.677 "b29eba8c-5dee-487c-8554-4bc2d2860407" 00:08:26.677 ], 00:08:26.677 "product_name": "Malloc disk", 00:08:26.677 "block_size": 512, 00:08:26.677 "num_blocks": 65536, 00:08:26.677 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:26.677 "assigned_rate_limits": { 00:08:26.677 "rw_ios_per_sec": 0, 00:08:26.677 "rw_mbytes_per_sec": 0, 00:08:26.677 "r_mbytes_per_sec": 0, 00:08:26.677 "w_mbytes_per_sec": 0 00:08:26.677 }, 00:08:26.677 "claimed": false, 00:08:26.677 "zoned": false, 00:08:26.677 "supported_io_types": { 00:08:26.677 "read": true, 00:08:26.677 "write": true, 00:08:26.677 "unmap": true, 00:08:26.677 "flush": true, 00:08:26.677 "reset": true, 00:08:26.677 "nvme_admin": false, 00:08:26.677 "nvme_io": false, 00:08:26.677 "nvme_io_md": false, 00:08:26.677 "write_zeroes": true, 
00:08:26.677 "zcopy": true, 00:08:26.677 "get_zone_info": false, 00:08:26.677 "zone_management": false, 00:08:26.677 "zone_append": false, 00:08:26.677 "compare": false, 00:08:26.677 "compare_and_write": false, 00:08:26.677 "abort": true, 00:08:26.678 "seek_hole": false, 00:08:26.678 "seek_data": false, 00:08:26.678 "copy": true, 00:08:26.678 "nvme_iov_md": false 00:08:26.678 }, 00:08:26.678 "memory_domains": [ 00:08:26.678 { 00:08:26.678 "dma_device_id": "system", 00:08:26.678 "dma_device_type": 1 00:08:26.678 }, 00:08:26.678 { 00:08:26.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.678 "dma_device_type": 2 00:08:26.678 } 00:08:26.678 ], 00:08:26.678 "driver_specific": {} 00:08:26.678 } 00:08:26.678 ] 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.678 [2024-11-27 21:40:49.570414] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:26.678 [2024-11-27 21:40:49.570508] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:26.678 [2024-11-27 21:40:49.570551] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.678 [2024-11-27 21:40:49.572407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:26.678 "name": "Existed_Raid", 00:08:26.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.678 "strip_size_kb": 0, 00:08:26.678 "state": "configuring", 00:08:26.678 "raid_level": "raid1", 00:08:26.678 "superblock": false, 00:08:26.678 "num_base_bdevs": 3, 00:08:26.678 "num_base_bdevs_discovered": 2, 00:08:26.678 "num_base_bdevs_operational": 3, 00:08:26.678 "base_bdevs_list": [ 00:08:26.678 { 00:08:26.678 "name": "BaseBdev1", 00:08:26.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.678 "is_configured": false, 00:08:26.678 "data_offset": 0, 00:08:26.678 "data_size": 0 00:08:26.678 }, 00:08:26.678 { 00:08:26.678 "name": "BaseBdev2", 00:08:26.678 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:26.678 "is_configured": true, 00:08:26.678 "data_offset": 0, 00:08:26.678 "data_size": 65536 00:08:26.678 }, 00:08:26.678 { 00:08:26.678 "name": "BaseBdev3", 00:08:26.678 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:26.678 "is_configured": true, 00:08:26.678 "data_offset": 0, 00:08:26.678 "data_size": 65536 00:08:26.678 } 00:08:26.678 ] 00:08:26.678 }' 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.678 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.938 [2024-11-27 21:40:49.965739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.938 21:40:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.939 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.939 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.939 21:40:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.939 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.939 "name": "Existed_Raid", 00:08:26.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.939 "strip_size_kb": 0, 00:08:26.939 "state": "configuring", 00:08:26.939 "raid_level": "raid1", 00:08:26.939 "superblock": false, 00:08:26.939 "num_base_bdevs": 3, 
00:08:26.939 "num_base_bdevs_discovered": 1, 00:08:26.939 "num_base_bdevs_operational": 3, 00:08:26.939 "base_bdevs_list": [ 00:08:26.939 { 00:08:26.939 "name": "BaseBdev1", 00:08:26.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.939 "is_configured": false, 00:08:26.939 "data_offset": 0, 00:08:26.939 "data_size": 0 00:08:26.939 }, 00:08:26.939 { 00:08:26.939 "name": null, 00:08:26.939 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:26.939 "is_configured": false, 00:08:26.939 "data_offset": 0, 00:08:26.939 "data_size": 65536 00:08:26.939 }, 00:08:26.939 { 00:08:26.939 "name": "BaseBdev3", 00:08:26.939 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:26.939 "is_configured": true, 00:08:26.939 "data_offset": 0, 00:08:26.939 "data_size": 65536 00:08:26.939 } 00:08:26.939 ] 00:08:26.939 }' 00:08:26.939 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.939 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.509 21:40:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 [2024-11-27 21:40:50.443682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.509 BaseBdev1 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.509 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.509 [ 00:08:27.509 { 00:08:27.509 "name": "BaseBdev1", 00:08:27.509 "aliases": [ 00:08:27.509 "4c902299-df77-4cda-954f-a1ec19a4f59e" 00:08:27.509 ], 00:08:27.509 "product_name": "Malloc disk", 
00:08:27.509 "block_size": 512, 00:08:27.509 "num_blocks": 65536, 00:08:27.509 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:27.509 "assigned_rate_limits": { 00:08:27.509 "rw_ios_per_sec": 0, 00:08:27.509 "rw_mbytes_per_sec": 0, 00:08:27.509 "r_mbytes_per_sec": 0, 00:08:27.509 "w_mbytes_per_sec": 0 00:08:27.509 }, 00:08:27.509 "claimed": true, 00:08:27.509 "claim_type": "exclusive_write", 00:08:27.509 "zoned": false, 00:08:27.509 "supported_io_types": { 00:08:27.509 "read": true, 00:08:27.509 "write": true, 00:08:27.509 "unmap": true, 00:08:27.509 "flush": true, 00:08:27.509 "reset": true, 00:08:27.509 "nvme_admin": false, 00:08:27.509 "nvme_io": false, 00:08:27.509 "nvme_io_md": false, 00:08:27.509 "write_zeroes": true, 00:08:27.509 "zcopy": true, 00:08:27.509 "get_zone_info": false, 00:08:27.509 "zone_management": false, 00:08:27.509 "zone_append": false, 00:08:27.509 "compare": false, 00:08:27.509 "compare_and_write": false, 00:08:27.509 "abort": true, 00:08:27.509 "seek_hole": false, 00:08:27.509 "seek_data": false, 00:08:27.509 "copy": true, 00:08:27.509 "nvme_iov_md": false 00:08:27.509 }, 00:08:27.509 "memory_domains": [ 00:08:27.509 { 00:08:27.509 "dma_device_id": "system", 00:08:27.509 "dma_device_type": 1 00:08:27.509 }, 00:08:27.510 { 00:08:27.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.510 "dma_device_type": 2 00:08:27.510 } 00:08:27.510 ], 00:08:27.510 "driver_specific": {} 00:08:27.510 } 00:08:27.510 ] 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.510 "name": "Existed_Raid", 00:08:27.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.510 "strip_size_kb": 0, 00:08:27.510 "state": "configuring", 00:08:27.510 "raid_level": "raid1", 00:08:27.510 "superblock": false, 00:08:27.510 "num_base_bdevs": 3, 00:08:27.510 "num_base_bdevs_discovered": 2, 00:08:27.510 "num_base_bdevs_operational": 3, 00:08:27.510 "base_bdevs_list": [ 00:08:27.510 { 00:08:27.510 "name": "BaseBdev1", 00:08:27.510 "uuid": 
"4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:27.510 "is_configured": true, 00:08:27.510 "data_offset": 0, 00:08:27.510 "data_size": 65536 00:08:27.510 }, 00:08:27.510 { 00:08:27.510 "name": null, 00:08:27.510 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:27.510 "is_configured": false, 00:08:27.510 "data_offset": 0, 00:08:27.510 "data_size": 65536 00:08:27.510 }, 00:08:27.510 { 00:08:27.510 "name": "BaseBdev3", 00:08:27.510 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:27.510 "is_configured": true, 00:08:27.510 "data_offset": 0, 00:08:27.510 "data_size": 65536 00:08:27.510 } 00:08:27.510 ] 00:08:27.510 }' 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.510 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.079 [2024-11-27 21:40:50.942898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:28.079 21:40:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.079 "name": "Existed_Raid", 00:08:28.079 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:28.079 "strip_size_kb": 0, 00:08:28.079 "state": "configuring", 00:08:28.079 "raid_level": "raid1", 00:08:28.079 "superblock": false, 00:08:28.079 "num_base_bdevs": 3, 00:08:28.079 "num_base_bdevs_discovered": 1, 00:08:28.079 "num_base_bdevs_operational": 3, 00:08:28.079 "base_bdevs_list": [ 00:08:28.079 { 00:08:28.079 "name": "BaseBdev1", 00:08:28.079 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:28.079 "is_configured": true, 00:08:28.079 "data_offset": 0, 00:08:28.079 "data_size": 65536 00:08:28.079 }, 00:08:28.079 { 00:08:28.079 "name": null, 00:08:28.079 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:28.079 "is_configured": false, 00:08:28.079 "data_offset": 0, 00:08:28.079 "data_size": 65536 00:08:28.079 }, 00:08:28.079 { 00:08:28.079 "name": null, 00:08:28.079 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:28.079 "is_configured": false, 00:08:28.079 "data_offset": 0, 00:08:28.079 "data_size": 65536 00:08:28.079 } 00:08:28.079 ] 00:08:28.079 }' 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.079 21:40:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.340 [2024-11-27 21:40:51.410123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.340 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.601 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.601 "name": "Existed_Raid", 00:08:28.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.601 "strip_size_kb": 0, 00:08:28.601 "state": "configuring", 00:08:28.601 "raid_level": "raid1", 00:08:28.601 "superblock": false, 00:08:28.601 "num_base_bdevs": 3, 00:08:28.601 "num_base_bdevs_discovered": 2, 00:08:28.601 "num_base_bdevs_operational": 3, 00:08:28.601 "base_bdevs_list": [ 00:08:28.601 { 00:08:28.601 "name": "BaseBdev1", 00:08:28.601 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:28.601 "is_configured": true, 00:08:28.601 "data_offset": 0, 00:08:28.601 "data_size": 65536 00:08:28.601 }, 00:08:28.601 { 00:08:28.601 "name": null, 00:08:28.601 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:28.601 "is_configured": false, 00:08:28.601 "data_offset": 0, 00:08:28.601 "data_size": 65536 00:08:28.601 }, 00:08:28.601 { 00:08:28.601 "name": "BaseBdev3", 00:08:28.601 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:28.601 "is_configured": true, 00:08:28.601 "data_offset": 0, 00:08:28.601 "data_size": 65536 00:08:28.601 } 00:08:28.601 ] 00:08:28.601 }' 00:08:28.601 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.601 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.861 [2024-11-27 21:40:51.881355] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.861 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.862 21:40:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.862 "name": "Existed_Raid", 00:08:28.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.862 "strip_size_kb": 0, 00:08:28.862 "state": "configuring", 00:08:28.862 "raid_level": "raid1", 00:08:28.862 "superblock": false, 00:08:28.862 "num_base_bdevs": 3, 00:08:28.862 "num_base_bdevs_discovered": 1, 00:08:28.862 "num_base_bdevs_operational": 3, 00:08:28.862 "base_bdevs_list": [ 00:08:28.862 { 00:08:28.862 "name": null, 00:08:28.862 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:28.862 "is_configured": false, 00:08:28.862 "data_offset": 0, 00:08:28.862 "data_size": 65536 00:08:28.862 }, 00:08:28.862 { 00:08:28.862 "name": null, 00:08:28.862 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:28.862 "is_configured": false, 00:08:28.862 "data_offset": 0, 00:08:28.862 "data_size": 65536 00:08:28.862 }, 00:08:28.862 { 00:08:28.862 "name": "BaseBdev3", 00:08:28.862 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:28.862 "is_configured": true, 00:08:28.862 "data_offset": 0, 00:08:28.862 "data_size": 65536 00:08:28.862 } 00:08:28.862 ] 00:08:28.862 }' 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.862 21:40:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 [2024-11-27 21:40:52.358915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.432 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.432 "name": "Existed_Raid", 00:08:29.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.432 "strip_size_kb": 0, 00:08:29.432 "state": "configuring", 00:08:29.432 "raid_level": "raid1", 00:08:29.432 "superblock": false, 00:08:29.432 "num_base_bdevs": 3, 00:08:29.432 "num_base_bdevs_discovered": 2, 00:08:29.432 "num_base_bdevs_operational": 3, 00:08:29.432 "base_bdevs_list": [ 00:08:29.432 { 00:08:29.432 "name": null, 00:08:29.432 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:29.432 "is_configured": false, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 65536 00:08:29.432 }, 00:08:29.432 { 00:08:29.432 "name": "BaseBdev2", 00:08:29.432 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:29.432 "is_configured": true, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 65536 00:08:29.432 }, 00:08:29.432 { 
00:08:29.432 "name": "BaseBdev3", 00:08:29.432 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:29.432 "is_configured": true, 00:08:29.432 "data_offset": 0, 00:08:29.432 "data_size": 65536 00:08:29.432 } 00:08:29.432 ] 00:08:29.433 }' 00:08:29.433 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.433 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c902299-df77-4cda-954f-a1ec19a4f59e 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.003 21:40:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 NewBaseBdev 00:08:30.003 [2024-11-27 21:40:52.940673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:30.003 [2024-11-27 21:40:52.940713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:30.003 [2024-11-27 21:40:52.940720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:30.003 [2024-11-27 21:40:52.940971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:30.003 [2024-11-27 21:40:52.941089] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:30.003 [2024-11-27 21:40:52.941102] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:30.003 [2024-11-27 21:40:52.941266] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.003 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.003 [ 00:08:30.003 { 00:08:30.003 "name": "NewBaseBdev", 00:08:30.003 "aliases": [ 00:08:30.003 "4c902299-df77-4cda-954f-a1ec19a4f59e" 00:08:30.003 ], 00:08:30.003 "product_name": "Malloc disk", 00:08:30.003 "block_size": 512, 00:08:30.003 "num_blocks": 65536, 00:08:30.003 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:30.003 "assigned_rate_limits": { 00:08:30.003 "rw_ios_per_sec": 0, 00:08:30.003 "rw_mbytes_per_sec": 0, 00:08:30.003 "r_mbytes_per_sec": 0, 00:08:30.003 "w_mbytes_per_sec": 0 00:08:30.003 }, 00:08:30.003 "claimed": true, 00:08:30.003 "claim_type": "exclusive_write", 00:08:30.003 "zoned": false, 00:08:30.003 "supported_io_types": { 00:08:30.003 "read": true, 00:08:30.003 "write": true, 00:08:30.003 "unmap": true, 00:08:30.003 "flush": true, 00:08:30.004 "reset": true, 00:08:30.004 "nvme_admin": false, 00:08:30.004 "nvme_io": false, 00:08:30.004 "nvme_io_md": false, 00:08:30.004 "write_zeroes": true, 00:08:30.004 "zcopy": true, 00:08:30.004 "get_zone_info": false, 00:08:30.004 "zone_management": false, 00:08:30.004 "zone_append": false, 00:08:30.004 "compare": false, 00:08:30.004 "compare_and_write": false, 00:08:30.004 "abort": true, 00:08:30.004 "seek_hole": false, 00:08:30.004 "seek_data": false, 00:08:30.004 "copy": true, 00:08:30.004 "nvme_iov_md": false 00:08:30.004 }, 00:08:30.004 "memory_domains": [ 00:08:30.004 { 00:08:30.004 
"dma_device_id": "system", 00:08:30.004 "dma_device_type": 1 00:08:30.004 }, 00:08:30.004 { 00:08:30.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.004 "dma_device_type": 2 00:08:30.004 } 00:08:30.004 ], 00:08:30.004 "driver_specific": {} 00:08:30.004 } 00:08:30.004 ] 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:30.004 21:40:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.004 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.004 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.004 "name": "Existed_Raid", 00:08:30.004 "uuid": "eb3b1592-48c8-459b-aaac-13ab3685678e", 00:08:30.004 "strip_size_kb": 0, 00:08:30.004 "state": "online", 00:08:30.004 "raid_level": "raid1", 00:08:30.004 "superblock": false, 00:08:30.004 "num_base_bdevs": 3, 00:08:30.004 "num_base_bdevs_discovered": 3, 00:08:30.004 "num_base_bdevs_operational": 3, 00:08:30.004 "base_bdevs_list": [ 00:08:30.004 { 00:08:30.004 "name": "NewBaseBdev", 00:08:30.004 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:30.004 "is_configured": true, 00:08:30.004 "data_offset": 0, 00:08:30.004 "data_size": 65536 00:08:30.004 }, 00:08:30.004 { 00:08:30.004 "name": "BaseBdev2", 00:08:30.004 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:30.004 "is_configured": true, 00:08:30.004 "data_offset": 0, 00:08:30.004 "data_size": 65536 00:08:30.004 }, 00:08:30.004 { 00:08:30.004 "name": "BaseBdev3", 00:08:30.004 "uuid": "b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:30.004 "is_configured": true, 00:08:30.004 "data_offset": 0, 00:08:30.004 "data_size": 65536 00:08:30.004 } 00:08:30.004 ] 00:08:30.004 }' 00:08:30.004 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.004 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.265 21:40:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.265 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.554 [2024-11-27 21:40:53.388280] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.554 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.554 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.554 "name": "Existed_Raid", 00:08:30.554 "aliases": [ 00:08:30.554 "eb3b1592-48c8-459b-aaac-13ab3685678e" 00:08:30.554 ], 00:08:30.554 "product_name": "Raid Volume", 00:08:30.554 "block_size": 512, 00:08:30.554 "num_blocks": 65536, 00:08:30.554 "uuid": "eb3b1592-48c8-459b-aaac-13ab3685678e", 00:08:30.554 "assigned_rate_limits": { 00:08:30.554 "rw_ios_per_sec": 0, 00:08:30.554 "rw_mbytes_per_sec": 0, 00:08:30.554 "r_mbytes_per_sec": 0, 00:08:30.554 "w_mbytes_per_sec": 0 00:08:30.554 }, 00:08:30.554 "claimed": false, 00:08:30.554 "zoned": false, 00:08:30.554 "supported_io_types": { 00:08:30.554 "read": true, 00:08:30.554 "write": true, 00:08:30.554 "unmap": false, 00:08:30.554 "flush": false, 00:08:30.554 "reset": true, 00:08:30.554 "nvme_admin": false, 00:08:30.554 "nvme_io": false, 00:08:30.554 "nvme_io_md": false, 00:08:30.554 "write_zeroes": true, 00:08:30.554 "zcopy": false, 00:08:30.554 
"get_zone_info": false, 00:08:30.554 "zone_management": false, 00:08:30.554 "zone_append": false, 00:08:30.554 "compare": false, 00:08:30.555 "compare_and_write": false, 00:08:30.555 "abort": false, 00:08:30.555 "seek_hole": false, 00:08:30.555 "seek_data": false, 00:08:30.555 "copy": false, 00:08:30.555 "nvme_iov_md": false 00:08:30.555 }, 00:08:30.555 "memory_domains": [ 00:08:30.555 { 00:08:30.555 "dma_device_id": "system", 00:08:30.555 "dma_device_type": 1 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.555 "dma_device_type": 2 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "dma_device_id": "system", 00:08:30.555 "dma_device_type": 1 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.555 "dma_device_type": 2 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "dma_device_id": "system", 00:08:30.555 "dma_device_type": 1 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.555 "dma_device_type": 2 00:08:30.555 } 00:08:30.555 ], 00:08:30.555 "driver_specific": { 00:08:30.555 "raid": { 00:08:30.555 "uuid": "eb3b1592-48c8-459b-aaac-13ab3685678e", 00:08:30.555 "strip_size_kb": 0, 00:08:30.555 "state": "online", 00:08:30.555 "raid_level": "raid1", 00:08:30.555 "superblock": false, 00:08:30.555 "num_base_bdevs": 3, 00:08:30.555 "num_base_bdevs_discovered": 3, 00:08:30.555 "num_base_bdevs_operational": 3, 00:08:30.555 "base_bdevs_list": [ 00:08:30.555 { 00:08:30.555 "name": "NewBaseBdev", 00:08:30.555 "uuid": "4c902299-df77-4cda-954f-a1ec19a4f59e", 00:08:30.555 "is_configured": true, 00:08:30.555 "data_offset": 0, 00:08:30.555 "data_size": 65536 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "name": "BaseBdev2", 00:08:30.555 "uuid": "ff105032-ba51-4b94-b689-e77fc4da599e", 00:08:30.555 "is_configured": true, 00:08:30.555 "data_offset": 0, 00:08:30.555 "data_size": 65536 00:08:30.555 }, 00:08:30.555 { 00:08:30.555 "name": "BaseBdev3", 00:08:30.555 "uuid": 
"b29eba8c-5dee-487c-8554-4bc2d2860407", 00:08:30.555 "is_configured": true, 00:08:30.555 "data_offset": 0, 00:08:30.555 "data_size": 65536 00:08:30.555 } 00:08:30.555 ] 00:08:30.555 } 00:08:30.555 } 00:08:30.555 }' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:30.555 BaseBdev2 00:08:30.555 BaseBdev3' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:30.555 [2024-11-27 21:40:53.643521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.555 [2024-11-27 21:40:53.643549] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.555 [2024-11-27 21:40:53.643615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.555 [2024-11-27 21:40:53.643917] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.555 [2024-11-27 21:40:53.643928] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 78186 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 78186 ']' 00:08:30.555 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 78186 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78186 00:08:30.867 killing process with pid 78186 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78186' 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 78186 00:08:30.867 
[2024-11-27 21:40:53.682308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 78186 00:08:30.867 [2024-11-27 21:40:53.713160] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:30.867 ************************************ 00:08:30.867 END TEST raid_state_function_test 00:08:30.867 ************************************ 00:08:30.867 00:08:30.867 real 0m8.455s 00:08:30.867 user 0m14.418s 00:08:30.867 sys 0m1.699s 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.867 21:40:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.128 21:40:53 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:08:31.128 21:40:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:31.128 21:40:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.128 21:40:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.128 ************************************ 00:08:31.128 START TEST raid_state_function_test_sb 00:08:31.128 ************************************ 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:31.128 21:40:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:31.128 
21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:31.128 Process raid pid: 78785 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=78785 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 78785' 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 78785 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78785 ']' 00:08:31.128 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.129 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.129 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.129 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.129 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.129 [2024-11-27 21:40:54.093689] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:31.129 [2024-11-27 21:40:54.093910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.129 [2024-11-27 21:40:54.226726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.389 [2024-11-27 21:40:54.251293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.389 [2024-11-27 21:40:54.293171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.389 [2024-11-27 21:40:54.293291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.959 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 [2024-11-27 21:40:54.915512] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:31.960 [2024-11-27 21:40:54.915634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:31.960 [2024-11-27 21:40:54.915666] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:31.960 [2024-11-27 21:40:54.915689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:31.960 [2024-11-27 21:40:54.915706] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:31.960 [2024-11-27 21:40:54.915729] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.960 "name": "Existed_Raid", 00:08:31.960 "uuid": "16080ce5-e15c-4a4f-ad79-dfaea67bf811", 00:08:31.960 "strip_size_kb": 0, 00:08:31.960 "state": "configuring", 00:08:31.960 "raid_level": "raid1", 00:08:31.960 "superblock": true, 00:08:31.960 "num_base_bdevs": 3, 00:08:31.960 "num_base_bdevs_discovered": 0, 00:08:31.960 "num_base_bdevs_operational": 3, 00:08:31.960 "base_bdevs_list": [ 00:08:31.960 { 00:08:31.960 "name": "BaseBdev1", 00:08:31.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.960 "is_configured": false, 00:08:31.960 "data_offset": 0, 00:08:31.960 "data_size": 0 00:08:31.960 }, 00:08:31.960 { 00:08:31.960 "name": "BaseBdev2", 00:08:31.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.960 "is_configured": false, 00:08:31.960 "data_offset": 0, 00:08:31.960 "data_size": 0 00:08:31.960 }, 00:08:31.960 { 00:08:31.960 "name": "BaseBdev3", 00:08:31.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.960 "is_configured": false, 00:08:31.960 "data_offset": 0, 00:08:31.960 "data_size": 0 00:08:31.960 } 00:08:31.960 ] 00:08:31.960 }' 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.960 21:40:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 [2024-11-27 21:40:55.386589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.528 [2024-11-27 21:40:55.386671] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 [2024-11-27 21:40:55.394602] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:32.528 [2024-11-27 21:40:55.394676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:32.528 [2024-11-27 21:40:55.394701] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.528 [2024-11-27 21:40:55.394724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.528 [2024-11-27 21:40:55.394742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.528 [2024-11-27 21:40:55.394762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 [2024-11-27 21:40:55.411321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.528 BaseBdev1 
00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.528 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.528 [ 00:08:32.528 { 00:08:32.528 "name": "BaseBdev1", 00:08:32.528 "aliases": [ 00:08:32.528 "b3ed8965-906a-4365-b60b-a93f02051311" 00:08:32.528 ], 00:08:32.528 "product_name": "Malloc disk", 00:08:32.528 "block_size": 512, 00:08:32.528 "num_blocks": 65536, 00:08:32.528 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:32.528 "assigned_rate_limits": { 00:08:32.528 
"rw_ios_per_sec": 0, 00:08:32.528 "rw_mbytes_per_sec": 0, 00:08:32.528 "r_mbytes_per_sec": 0, 00:08:32.528 "w_mbytes_per_sec": 0 00:08:32.528 }, 00:08:32.528 "claimed": true, 00:08:32.528 "claim_type": "exclusive_write", 00:08:32.528 "zoned": false, 00:08:32.528 "supported_io_types": { 00:08:32.528 "read": true, 00:08:32.528 "write": true, 00:08:32.528 "unmap": true, 00:08:32.528 "flush": true, 00:08:32.528 "reset": true, 00:08:32.528 "nvme_admin": false, 00:08:32.528 "nvme_io": false, 00:08:32.528 "nvme_io_md": false, 00:08:32.528 "write_zeroes": true, 00:08:32.528 "zcopy": true, 00:08:32.528 "get_zone_info": false, 00:08:32.528 "zone_management": false, 00:08:32.528 "zone_append": false, 00:08:32.528 "compare": false, 00:08:32.528 "compare_and_write": false, 00:08:32.528 "abort": true, 00:08:32.528 "seek_hole": false, 00:08:32.528 "seek_data": false, 00:08:32.528 "copy": true, 00:08:32.528 "nvme_iov_md": false 00:08:32.528 }, 00:08:32.528 "memory_domains": [ 00:08:32.528 { 00:08:32.528 "dma_device_id": "system", 00:08:32.528 "dma_device_type": 1 00:08:32.529 }, 00:08:32.529 { 00:08:32.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.529 "dma_device_type": 2 00:08:32.529 } 00:08:32.529 ], 00:08:32.529 "driver_specific": {} 00:08:32.529 } 00:08:32.529 ] 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.529 "name": "Existed_Raid", 00:08:32.529 "uuid": "a56606cb-6d69-4ff1-a0db-e2b0a771bfac", 00:08:32.529 "strip_size_kb": 0, 00:08:32.529 "state": "configuring", 00:08:32.529 "raid_level": "raid1", 00:08:32.529 "superblock": true, 00:08:32.529 "num_base_bdevs": 3, 00:08:32.529 "num_base_bdevs_discovered": 1, 00:08:32.529 "num_base_bdevs_operational": 3, 00:08:32.529 "base_bdevs_list": [ 00:08:32.529 { 00:08:32.529 "name": "BaseBdev1", 00:08:32.529 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:32.529 "is_configured": true, 00:08:32.529 "data_offset": 2048, 00:08:32.529 "data_size": 63488 
00:08:32.529 }, 00:08:32.529 { 00:08:32.529 "name": "BaseBdev2", 00:08:32.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.529 "is_configured": false, 00:08:32.529 "data_offset": 0, 00:08:32.529 "data_size": 0 00:08:32.529 }, 00:08:32.529 { 00:08:32.529 "name": "BaseBdev3", 00:08:32.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.529 "is_configured": false, 00:08:32.529 "data_offset": 0, 00:08:32.529 "data_size": 0 00:08:32.529 } 00:08:32.529 ] 00:08:32.529 }' 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.529 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 [2024-11-27 21:40:55.870575] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.789 [2024-11-27 21:40:55.870618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 [2024-11-27 21:40:55.882587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.789 [2024-11-27 21:40:55.884472] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.789 [2024-11-27 21:40:55.884548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.789 [2024-11-27 21:40:55.884561] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:32.789 [2024-11-27 21:40:55.884571] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.789 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.048 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.048 "name": "Existed_Raid", 00:08:33.048 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:33.048 "strip_size_kb": 0, 00:08:33.048 "state": "configuring", 00:08:33.048 "raid_level": "raid1", 00:08:33.048 "superblock": true, 00:08:33.048 "num_base_bdevs": 3, 00:08:33.048 "num_base_bdevs_discovered": 1, 00:08:33.048 "num_base_bdevs_operational": 3, 00:08:33.048 "base_bdevs_list": [ 00:08:33.048 { 00:08:33.048 "name": "BaseBdev1", 00:08:33.048 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:33.048 "is_configured": true, 00:08:33.048 "data_offset": 2048, 00:08:33.048 "data_size": 63488 00:08:33.048 }, 00:08:33.048 { 00:08:33.048 "name": "BaseBdev2", 00:08:33.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.048 "is_configured": false, 00:08:33.048 "data_offset": 0, 00:08:33.048 "data_size": 0 00:08:33.048 }, 00:08:33.048 { 00:08:33.048 "name": "BaseBdev3", 00:08:33.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.048 "is_configured": false, 00:08:33.048 "data_offset": 0, 00:08:33.048 "data_size": 0 00:08:33.048 } 00:08:33.048 ] 00:08:33.048 }' 00:08:33.048 21:40:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.048 21:40:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.308 [2024-11-27 21:40:56.324676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.308 BaseBdev2 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.308 [ 00:08:33.308 { 00:08:33.308 "name": "BaseBdev2", 00:08:33.308 "aliases": [ 00:08:33.308 "f36a0496-0c9b-4761-be7f-62a6754045f4" 00:08:33.308 ], 00:08:33.308 "product_name": "Malloc disk", 00:08:33.308 "block_size": 512, 00:08:33.308 "num_blocks": 65536, 00:08:33.308 "uuid": "f36a0496-0c9b-4761-be7f-62a6754045f4", 00:08:33.308 "assigned_rate_limits": { 00:08:33.308 "rw_ios_per_sec": 0, 00:08:33.308 "rw_mbytes_per_sec": 0, 00:08:33.308 "r_mbytes_per_sec": 0, 00:08:33.308 "w_mbytes_per_sec": 0 00:08:33.308 }, 00:08:33.308 "claimed": true, 00:08:33.308 "claim_type": "exclusive_write", 00:08:33.308 "zoned": false, 00:08:33.308 "supported_io_types": { 00:08:33.308 "read": true, 00:08:33.308 "write": true, 00:08:33.308 "unmap": true, 00:08:33.308 "flush": true, 00:08:33.308 "reset": true, 00:08:33.308 "nvme_admin": false, 00:08:33.308 "nvme_io": false, 00:08:33.308 "nvme_io_md": false, 00:08:33.308 "write_zeroes": true, 00:08:33.308 "zcopy": true, 00:08:33.308 "get_zone_info": false, 00:08:33.308 "zone_management": false, 00:08:33.308 "zone_append": false, 00:08:33.308 "compare": false, 00:08:33.308 "compare_and_write": false, 00:08:33.308 "abort": true, 00:08:33.308 "seek_hole": false, 00:08:33.308 "seek_data": false, 00:08:33.308 "copy": true, 00:08:33.308 "nvme_iov_md": false 00:08:33.308 }, 00:08:33.308 "memory_domains": [ 00:08:33.308 { 00:08:33.308 "dma_device_id": "system", 00:08:33.308 "dma_device_type": 1 00:08:33.308 }, 00:08:33.308 { 00:08:33.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.308 "dma_device_type": 2 00:08:33.308 } 00:08:33.308 ], 00:08:33.308 "driver_specific": {} 00:08:33.308 } 00:08:33.308 ] 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.308 
21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.308 "name": "Existed_Raid", 00:08:33.308 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:33.308 "strip_size_kb": 0, 00:08:33.308 "state": "configuring", 00:08:33.308 "raid_level": "raid1", 00:08:33.308 "superblock": true, 00:08:33.308 "num_base_bdevs": 3, 00:08:33.308 "num_base_bdevs_discovered": 2, 00:08:33.308 "num_base_bdevs_operational": 3, 00:08:33.308 "base_bdevs_list": [ 00:08:33.308 { 00:08:33.308 "name": "BaseBdev1", 00:08:33.308 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:33.308 "is_configured": true, 00:08:33.308 "data_offset": 2048, 00:08:33.308 "data_size": 63488 00:08:33.308 }, 00:08:33.308 { 00:08:33.308 "name": "BaseBdev2", 00:08:33.308 "uuid": "f36a0496-0c9b-4761-be7f-62a6754045f4", 00:08:33.308 "is_configured": true, 00:08:33.308 "data_offset": 2048, 00:08:33.308 "data_size": 63488 00:08:33.308 }, 00:08:33.308 { 00:08:33.308 "name": "BaseBdev3", 00:08:33.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:33.308 "is_configured": false, 00:08:33.308 "data_offset": 0, 00:08:33.308 "data_size": 0 00:08:33.308 } 00:08:33.308 ] 00:08:33.308 }' 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.308 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 [2024-11-27 21:40:56.803700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:33.877 [2024-11-27 21:40:56.803945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000001900 00:08:33.877 [2024-11-27 21:40:56.803967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:33.877 [2024-11-27 21:40:56.804312] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:33.877 BaseBdev3 00:08:33.877 [2024-11-27 21:40:56.804498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:33.877 [2024-11-27 21:40:56.804536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:33.877 [2024-11-27 21:40:56.804689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.877 21:40:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.877 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.877 [ 00:08:33.877 { 00:08:33.877 "name": "BaseBdev3", 00:08:33.877 "aliases": [ 00:08:33.877 "11a0fec2-f3e6-4ce2-a445-e13cc33291c0" 00:08:33.877 ], 00:08:33.877 "product_name": "Malloc disk", 00:08:33.877 "block_size": 512, 00:08:33.878 "num_blocks": 65536, 00:08:33.878 "uuid": "11a0fec2-f3e6-4ce2-a445-e13cc33291c0", 00:08:33.878 "assigned_rate_limits": { 00:08:33.878 "rw_ios_per_sec": 0, 00:08:33.878 "rw_mbytes_per_sec": 0, 00:08:33.878 "r_mbytes_per_sec": 0, 00:08:33.878 "w_mbytes_per_sec": 0 00:08:33.878 }, 00:08:33.878 "claimed": true, 00:08:33.878 "claim_type": "exclusive_write", 00:08:33.878 "zoned": false, 00:08:33.878 "supported_io_types": { 00:08:33.878 "read": true, 00:08:33.878 "write": true, 00:08:33.878 "unmap": true, 00:08:33.878 "flush": true, 00:08:33.878 "reset": true, 00:08:33.878 "nvme_admin": false, 00:08:33.878 "nvme_io": false, 00:08:33.878 "nvme_io_md": false, 00:08:33.878 "write_zeroes": true, 00:08:33.878 "zcopy": true, 00:08:33.878 "get_zone_info": false, 00:08:33.878 "zone_management": false, 00:08:33.878 "zone_append": false, 00:08:33.878 "compare": false, 00:08:33.878 "compare_and_write": false, 00:08:33.878 "abort": true, 00:08:33.878 "seek_hole": false, 00:08:33.878 "seek_data": false, 00:08:33.878 "copy": true, 00:08:33.878 "nvme_iov_md": false 00:08:33.878 }, 00:08:33.878 "memory_domains": [ 00:08:33.878 { 00:08:33.878 "dma_device_id": "system", 00:08:33.878 "dma_device_type": 1 00:08:33.878 }, 00:08:33.878 { 00:08:33.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.878 "dma_device_type": 2 00:08:33.878 } 00:08:33.878 ], 00:08:33.878 "driver_specific": {} 00:08:33.878 } 00:08:33.878 ] 
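For context on the numbers in the descriptor dump above: `bdev_malloc_create 32 512 -b BaseBdev3` requests a 32 MiB malloc disk with 512-byte blocks, which is why `bdev_get_bdevs` reports `"num_blocks": 65536` and `"block_size": 512`. A minimal sketch of that arithmetic (plain bash, no SPDK target needed):

```shell
#!/usr/bin/env bash
# bdev_malloc_create <size_MiB> <block_size> -b <name>: the resulting bdev
# exposes size_MiB worth of block_size-byte blocks.
size_mib=32
block_size=512
num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
echo "num_blocks=$num_blocks"   # 65536, matching the bdev_get_bdevs dump
```

The raid volume's `"num_blocks": 63488` in the earlier dumps is smaller than 65536 because the test creates the array with `-s` (superblock), which reserves space at the start of each base bdev (hence `"data_offset": 2048`).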
00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.878 
21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.878 "name": "Existed_Raid", 00:08:33.878 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:33.878 "strip_size_kb": 0, 00:08:33.878 "state": "online", 00:08:33.878 "raid_level": "raid1", 00:08:33.878 "superblock": true, 00:08:33.878 "num_base_bdevs": 3, 00:08:33.878 "num_base_bdevs_discovered": 3, 00:08:33.878 "num_base_bdevs_operational": 3, 00:08:33.878 "base_bdevs_list": [ 00:08:33.878 { 00:08:33.878 "name": "BaseBdev1", 00:08:33.878 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:33.878 "is_configured": true, 00:08:33.878 "data_offset": 2048, 00:08:33.878 "data_size": 63488 00:08:33.878 }, 00:08:33.878 { 00:08:33.878 "name": "BaseBdev2", 00:08:33.878 "uuid": "f36a0496-0c9b-4761-be7f-62a6754045f4", 00:08:33.878 "is_configured": true, 00:08:33.878 "data_offset": 2048, 00:08:33.878 "data_size": 63488 00:08:33.878 }, 00:08:33.878 { 00:08:33.878 "name": "BaseBdev3", 00:08:33.878 "uuid": "11a0fec2-f3e6-4ce2-a445-e13cc33291c0", 00:08:33.878 "is_configured": true, 00:08:33.878 "data_offset": 2048, 00:08:33.878 "data_size": 63488 00:08:33.878 } 00:08:33.878 ] 00:08:33.878 }' 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.878 21:40:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.137 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.396 [2024-11-27 21:40:57.267199] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.396 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.396 "name": "Existed_Raid", 00:08:34.396 "aliases": [ 00:08:34.396 "0b2afb60-f21c-4172-97d1-6685e3ac7a9b" 00:08:34.396 ], 00:08:34.396 "product_name": "Raid Volume", 00:08:34.396 "block_size": 512, 00:08:34.396 "num_blocks": 63488, 00:08:34.396 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:34.396 "assigned_rate_limits": { 00:08:34.396 "rw_ios_per_sec": 0, 00:08:34.396 "rw_mbytes_per_sec": 0, 00:08:34.396 "r_mbytes_per_sec": 0, 00:08:34.396 "w_mbytes_per_sec": 0 00:08:34.396 }, 00:08:34.396 "claimed": false, 00:08:34.396 "zoned": false, 00:08:34.396 "supported_io_types": { 00:08:34.396 "read": true, 00:08:34.396 "write": true, 00:08:34.396 "unmap": false, 00:08:34.396 "flush": false, 00:08:34.396 "reset": true, 00:08:34.396 "nvme_admin": false, 00:08:34.396 "nvme_io": false, 00:08:34.396 "nvme_io_md": false, 00:08:34.397 "write_zeroes": true, 
00:08:34.397 "zcopy": false, 00:08:34.397 "get_zone_info": false, 00:08:34.397 "zone_management": false, 00:08:34.397 "zone_append": false, 00:08:34.397 "compare": false, 00:08:34.397 "compare_and_write": false, 00:08:34.397 "abort": false, 00:08:34.397 "seek_hole": false, 00:08:34.397 "seek_data": false, 00:08:34.397 "copy": false, 00:08:34.397 "nvme_iov_md": false 00:08:34.397 }, 00:08:34.397 "memory_domains": [ 00:08:34.397 { 00:08:34.397 "dma_device_id": "system", 00:08:34.397 "dma_device_type": 1 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.397 "dma_device_type": 2 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "dma_device_id": "system", 00:08:34.397 "dma_device_type": 1 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.397 "dma_device_type": 2 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "dma_device_id": "system", 00:08:34.397 "dma_device_type": 1 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.397 "dma_device_type": 2 00:08:34.397 } 00:08:34.397 ], 00:08:34.397 "driver_specific": { 00:08:34.397 "raid": { 00:08:34.397 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:34.397 "strip_size_kb": 0, 00:08:34.397 "state": "online", 00:08:34.397 "raid_level": "raid1", 00:08:34.397 "superblock": true, 00:08:34.397 "num_base_bdevs": 3, 00:08:34.397 "num_base_bdevs_discovered": 3, 00:08:34.397 "num_base_bdevs_operational": 3, 00:08:34.397 "base_bdevs_list": [ 00:08:34.397 { 00:08:34.397 "name": "BaseBdev1", 00:08:34.397 "uuid": "b3ed8965-906a-4365-b60b-a93f02051311", 00:08:34.397 "is_configured": true, 00:08:34.397 "data_offset": 2048, 00:08:34.397 "data_size": 63488 00:08:34.397 }, 00:08:34.397 { 00:08:34.397 "name": "BaseBdev2", 00:08:34.397 "uuid": "f36a0496-0c9b-4761-be7f-62a6754045f4", 00:08:34.397 "is_configured": true, 00:08:34.397 "data_offset": 2048, 00:08:34.397 "data_size": 63488 00:08:34.397 }, 00:08:34.397 { 
00:08:34.397 "name": "BaseBdev3", 00:08:34.397 "uuid": "11a0fec2-f3e6-4ce2-a445-e13cc33291c0", 00:08:34.397 "is_configured": true, 00:08:34.397 "data_offset": 2048, 00:08:34.397 "data_size": 63488 00:08:34.397 } 00:08:34.397 ] 00:08:34.397 } 00:08:34.397 } 00:08:34.397 }' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:34.397 BaseBdev2 00:08:34.397 BaseBdev3' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.397 21:40:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.397 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.397 [2024-11-27 21:40:57.506538] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:34.656 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.656 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:34.656 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.657 
21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.657 "name": "Existed_Raid", 00:08:34.657 "uuid": "0b2afb60-f21c-4172-97d1-6685e3ac7a9b", 00:08:34.657 "strip_size_kb": 0, 00:08:34.657 "state": "online", 00:08:34.657 "raid_level": "raid1", 00:08:34.657 "superblock": true, 00:08:34.657 "num_base_bdevs": 3, 00:08:34.657 "num_base_bdevs_discovered": 2, 00:08:34.657 "num_base_bdevs_operational": 2, 00:08:34.657 "base_bdevs_list": [ 00:08:34.657 { 00:08:34.657 "name": null, 00:08:34.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.657 "is_configured": false, 00:08:34.657 "data_offset": 0, 00:08:34.657 "data_size": 63488 00:08:34.657 }, 00:08:34.657 { 00:08:34.657 "name": "BaseBdev2", 00:08:34.657 "uuid": "f36a0496-0c9b-4761-be7f-62a6754045f4", 00:08:34.657 "is_configured": true, 00:08:34.657 "data_offset": 2048, 00:08:34.657 "data_size": 63488 00:08:34.657 }, 00:08:34.657 { 00:08:34.657 "name": "BaseBdev3", 00:08:34.657 "uuid": "11a0fec2-f3e6-4ce2-a445-e13cc33291c0", 00:08:34.657 "is_configured": true, 00:08:34.657 "data_offset": 2048, 00:08:34.657 "data_size": 63488 00:08:34.657 } 00:08:34.657 ] 00:08:34.657 }' 00:08:34.657 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.657 
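The `verify_raid_bdev_state` helper traced above filters `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "Existed_Raid")'` and then checks fields such as `state` and `num_base_bdevs_discovered`. A runnable sketch of that check against a JSON fragment captured from this trace — python3 stands in for jq here so the example needs neither a live RPC target nor jq installed:

```shell
#!/usr/bin/env bash
# Fragment copied from the trace above: Existed_Raid stays online with
# 2 of 3 base bdevs after BaseBdev1 is deleted (raid1 tolerates the loss,
# so has_redundancy keeps expected_state=online).
raid_bdev_info='{"name":"Existed_Raid","state":"online","raid_level":"raid1","num_base_bdevs":3,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}'

state=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["state"])' "$raid_bdev_info")
discovered=$(python3 -c 'import json,sys; print(json.loads(sys.argv[1])["num_base_bdevs_discovered"])' "$raid_bdev_info")

if [[ $state == online && $discovered -eq 2 ]]; then
    echo "Existed_Raid is $state with $discovered/3 base bdevs"
fi
```

Had the array been raid0 instead of raid1, `has_redundancy` would fail its case match and the test would expect the array to go offline after the first base bdev removal.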
21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.916 21:40:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:34.916 [2024-11-27 21:40:58.009040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.916 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 [2024-11-27 21:40:58.064110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:35.176 [2024-11-27 21:40:58.064232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.176 [2024-11-27 21:40:58.075676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.176 [2024-11-27 21:40:58.075724] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.176 [2024-11-27 21:40:58.075737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 BaseBdev2 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.176 [ 00:08:35.176 { 00:08:35.176 "name": "BaseBdev2", 00:08:35.176 "aliases": [ 00:08:35.176 "80cef721-c5b5-4539-a659-b3d3eba153f2" 00:08:35.176 ], 00:08:35.176 "product_name": "Malloc disk", 00:08:35.176 "block_size": 512, 00:08:35.176 "num_blocks": 65536, 00:08:35.176 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:35.176 "assigned_rate_limits": { 00:08:35.176 "rw_ios_per_sec": 0, 00:08:35.176 "rw_mbytes_per_sec": 0, 00:08:35.176 "r_mbytes_per_sec": 0, 00:08:35.176 "w_mbytes_per_sec": 0 00:08:35.176 }, 00:08:35.176 "claimed": false, 00:08:35.176 "zoned": false, 00:08:35.176 "supported_io_types": { 00:08:35.176 "read": true, 00:08:35.176 "write": true, 00:08:35.176 "unmap": true, 00:08:35.176 "flush": true, 00:08:35.176 "reset": true, 00:08:35.176 "nvme_admin": false, 00:08:35.176 "nvme_io": false, 00:08:35.176 
"nvme_io_md": false, 00:08:35.176 "write_zeroes": true, 00:08:35.176 "zcopy": true, 00:08:35.176 "get_zone_info": false, 00:08:35.176 "zone_management": false, 00:08:35.176 "zone_append": false, 00:08:35.176 "compare": false, 00:08:35.176 "compare_and_write": false, 00:08:35.176 "abort": true, 00:08:35.176 "seek_hole": false, 00:08:35.176 "seek_data": false, 00:08:35.176 "copy": true, 00:08:35.176 "nvme_iov_md": false 00:08:35.176 }, 00:08:35.176 "memory_domains": [ 00:08:35.176 { 00:08:35.176 "dma_device_id": "system", 00:08:35.176 "dma_device_type": 1 00:08:35.176 }, 00:08:35.176 { 00:08:35.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.176 "dma_device_type": 2 00:08:35.176 } 00:08:35.176 ], 00:08:35.176 "driver_specific": {} 00:08:35.176 } 00:08:35.176 ] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.176 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.177 BaseBdev3 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.177 [ 00:08:35.177 { 00:08:35.177 "name": "BaseBdev3", 00:08:35.177 "aliases": [ 00:08:35.177 "50d45bf4-8815-40d9-a48d-be717a097cc7" 00:08:35.177 ], 00:08:35.177 "product_name": "Malloc disk", 00:08:35.177 "block_size": 512, 00:08:35.177 "num_blocks": 65536, 00:08:35.177 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:35.177 "assigned_rate_limits": { 00:08:35.177 "rw_ios_per_sec": 0, 00:08:35.177 "rw_mbytes_per_sec": 0, 00:08:35.177 "r_mbytes_per_sec": 0, 00:08:35.177 "w_mbytes_per_sec": 0 00:08:35.177 }, 00:08:35.177 "claimed": false, 00:08:35.177 "zoned": false, 00:08:35.177 "supported_io_types": { 00:08:35.177 "read": true, 00:08:35.177 "write": true, 00:08:35.177 "unmap": true, 00:08:35.177 "flush": true, 00:08:35.177 "reset": true, 00:08:35.177 "nvme_admin": false, 
00:08:35.177 "nvme_io": false, 00:08:35.177 "nvme_io_md": false, 00:08:35.177 "write_zeroes": true, 00:08:35.177 "zcopy": true, 00:08:35.177 "get_zone_info": false, 00:08:35.177 "zone_management": false, 00:08:35.177 "zone_append": false, 00:08:35.177 "compare": false, 00:08:35.177 "compare_and_write": false, 00:08:35.177 "abort": true, 00:08:35.177 "seek_hole": false, 00:08:35.177 "seek_data": false, 00:08:35.177 "copy": true, 00:08:35.177 "nvme_iov_md": false 00:08:35.177 }, 00:08:35.177 "memory_domains": [ 00:08:35.177 { 00:08:35.177 "dma_device_id": "system", 00:08:35.177 "dma_device_type": 1 00:08:35.177 }, 00:08:35.177 { 00:08:35.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.177 "dma_device_type": 2 00:08:35.177 } 00:08:35.177 ], 00:08:35.177 "driver_specific": {} 00:08:35.177 } 00:08:35.177 ] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.177 [2024-11-27 21:40:58.238740] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.177 [2024-11-27 21:40:58.238847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.177 [2024-11-27 21:40:58.238895] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.177 [2024-11-27 21:40:58.240674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.177 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.177 
21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.437 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.437 "name": "Existed_Raid", 00:08:35.437 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:35.437 "strip_size_kb": 0, 00:08:35.437 "state": "configuring", 00:08:35.437 "raid_level": "raid1", 00:08:35.437 "superblock": true, 00:08:35.437 "num_base_bdevs": 3, 00:08:35.437 "num_base_bdevs_discovered": 2, 00:08:35.437 "num_base_bdevs_operational": 3, 00:08:35.437 "base_bdevs_list": [ 00:08:35.437 { 00:08:35.437 "name": "BaseBdev1", 00:08:35.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.437 "is_configured": false, 00:08:35.437 "data_offset": 0, 00:08:35.437 "data_size": 0 00:08:35.437 }, 00:08:35.437 { 00:08:35.437 "name": "BaseBdev2", 00:08:35.437 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:35.437 "is_configured": true, 00:08:35.437 "data_offset": 2048, 00:08:35.437 "data_size": 63488 00:08:35.437 }, 00:08:35.437 { 00:08:35.437 "name": "BaseBdev3", 00:08:35.437 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:35.437 "is_configured": true, 00:08:35.437 "data_offset": 2048, 00:08:35.437 "data_size": 63488 00:08:35.437 } 00:08:35.437 ] 00:08:35.437 }' 00:08:35.437 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.437 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.696 [2024-11-27 21:40:58.677972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:35.696 21:40:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.696 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.697 "name": 
"Existed_Raid", 00:08:35.697 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:35.697 "strip_size_kb": 0, 00:08:35.697 "state": "configuring", 00:08:35.697 "raid_level": "raid1", 00:08:35.697 "superblock": true, 00:08:35.697 "num_base_bdevs": 3, 00:08:35.697 "num_base_bdevs_discovered": 1, 00:08:35.697 "num_base_bdevs_operational": 3, 00:08:35.697 "base_bdevs_list": [ 00:08:35.697 { 00:08:35.697 "name": "BaseBdev1", 00:08:35.697 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.697 "is_configured": false, 00:08:35.697 "data_offset": 0, 00:08:35.697 "data_size": 0 00:08:35.697 }, 00:08:35.697 { 00:08:35.697 "name": null, 00:08:35.697 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:35.697 "is_configured": false, 00:08:35.697 "data_offset": 0, 00:08:35.697 "data_size": 63488 00:08:35.697 }, 00:08:35.697 { 00:08:35.697 "name": "BaseBdev3", 00:08:35.697 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:35.697 "is_configured": true, 00:08:35.697 "data_offset": 2048, 00:08:35.697 "data_size": 63488 00:08:35.697 } 00:08:35.697 ] 00:08:35.697 }' 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.697 21:40:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.268 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.268 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:36.268 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:36.269 
21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 [2024-11-27 21:40:59.136154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.269 BaseBdev1 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.269 [ 00:08:36.269 { 00:08:36.269 "name": "BaseBdev1", 00:08:36.269 "aliases": [ 00:08:36.269 "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880" 00:08:36.269 ], 00:08:36.269 "product_name": "Malloc disk", 00:08:36.269 "block_size": 512, 00:08:36.269 "num_blocks": 65536, 00:08:36.269 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:36.269 "assigned_rate_limits": { 00:08:36.269 "rw_ios_per_sec": 0, 00:08:36.269 "rw_mbytes_per_sec": 0, 00:08:36.269 "r_mbytes_per_sec": 0, 00:08:36.269 "w_mbytes_per_sec": 0 00:08:36.269 }, 00:08:36.269 "claimed": true, 00:08:36.269 "claim_type": "exclusive_write", 00:08:36.269 "zoned": false, 00:08:36.269 "supported_io_types": { 00:08:36.269 "read": true, 00:08:36.269 "write": true, 00:08:36.269 "unmap": true, 00:08:36.269 "flush": true, 00:08:36.269 "reset": true, 00:08:36.269 "nvme_admin": false, 00:08:36.269 "nvme_io": false, 00:08:36.269 "nvme_io_md": false, 00:08:36.269 "write_zeroes": true, 00:08:36.269 "zcopy": true, 00:08:36.269 "get_zone_info": false, 00:08:36.269 "zone_management": false, 00:08:36.269 "zone_append": false, 00:08:36.269 "compare": false, 00:08:36.269 "compare_and_write": false, 00:08:36.269 "abort": true, 00:08:36.269 "seek_hole": false, 00:08:36.269 "seek_data": false, 00:08:36.269 "copy": true, 00:08:36.269 "nvme_iov_md": false 00:08:36.269 }, 00:08:36.269 "memory_domains": [ 00:08:36.269 { 00:08:36.269 "dma_device_id": "system", 00:08:36.269 "dma_device_type": 1 00:08:36.269 }, 00:08:36.269 { 00:08:36.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.269 "dma_device_type": 2 00:08:36.269 } 00:08:36.269 ], 00:08:36.269 "driver_specific": {} 00:08:36.269 } 00:08:36.269 ] 00:08:36.269 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:36.270 
21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.270 "name": "Existed_Raid", 00:08:36.270 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:36.270 "strip_size_kb": 0, 
00:08:36.270 "state": "configuring", 00:08:36.270 "raid_level": "raid1", 00:08:36.270 "superblock": true, 00:08:36.270 "num_base_bdevs": 3, 00:08:36.270 "num_base_bdevs_discovered": 2, 00:08:36.270 "num_base_bdevs_operational": 3, 00:08:36.270 "base_bdevs_list": [ 00:08:36.270 { 00:08:36.270 "name": "BaseBdev1", 00:08:36.270 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:36.270 "is_configured": true, 00:08:36.270 "data_offset": 2048, 00:08:36.270 "data_size": 63488 00:08:36.270 }, 00:08:36.270 { 00:08:36.270 "name": null, 00:08:36.270 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:36.270 "is_configured": false, 00:08:36.270 "data_offset": 0, 00:08:36.270 "data_size": 63488 00:08:36.270 }, 00:08:36.270 { 00:08:36.270 "name": "BaseBdev3", 00:08:36.270 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:36.270 "is_configured": true, 00:08:36.270 "data_offset": 2048, 00:08:36.270 "data_size": 63488 00:08:36.270 } 00:08:36.270 ] 00:08:36.270 }' 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.270 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.529 [2024-11-27 21:40:59.603406] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.529 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.529 "name": "Existed_Raid", 00:08:36.529 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:36.529 "strip_size_kb": 0, 00:08:36.529 "state": "configuring", 00:08:36.529 "raid_level": "raid1", 00:08:36.529 "superblock": true, 00:08:36.529 "num_base_bdevs": 3, 00:08:36.529 "num_base_bdevs_discovered": 1, 00:08:36.530 "num_base_bdevs_operational": 3, 00:08:36.530 "base_bdevs_list": [ 00:08:36.530 { 00:08:36.530 "name": "BaseBdev1", 00:08:36.530 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:36.530 "is_configured": true, 00:08:36.530 "data_offset": 2048, 00:08:36.530 "data_size": 63488 00:08:36.530 }, 00:08:36.530 { 00:08:36.530 "name": null, 00:08:36.530 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:36.530 "is_configured": false, 00:08:36.530 "data_offset": 0, 00:08:36.530 "data_size": 63488 00:08:36.530 }, 00:08:36.530 { 00:08:36.530 "name": null, 00:08:36.530 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:36.530 "is_configured": false, 00:08:36.530 "data_offset": 0, 00:08:36.530 "data_size": 63488 00:08:36.530 } 00:08:36.530 ] 00:08:36.530 }' 00:08:36.530 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.530 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.098 21:40:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.098 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.098 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.098 21:40:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.098 21:40:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.098 [2024-11-27 21:41:00.034739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.098 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.099 "name": "Existed_Raid", 00:08:37.099 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:37.099 "strip_size_kb": 0, 00:08:37.099 "state": "configuring", 00:08:37.099 "raid_level": "raid1", 00:08:37.099 "superblock": true, 00:08:37.099 "num_base_bdevs": 3, 00:08:37.099 "num_base_bdevs_discovered": 2, 00:08:37.099 "num_base_bdevs_operational": 3, 00:08:37.099 "base_bdevs_list": [ 00:08:37.099 { 00:08:37.099 "name": "BaseBdev1", 00:08:37.099 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:37.099 "is_configured": true, 00:08:37.099 "data_offset": 2048, 00:08:37.099 "data_size": 63488 00:08:37.099 }, 00:08:37.099 { 00:08:37.099 "name": null, 00:08:37.099 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:37.099 "is_configured": false, 00:08:37.099 "data_offset": 0, 00:08:37.099 "data_size": 63488 00:08:37.099 }, 00:08:37.099 { 00:08:37.099 "name": "BaseBdev3", 00:08:37.099 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:37.099 "is_configured": true, 00:08:37.099 "data_offset": 2048, 00:08:37.099 "data_size": 63488 00:08:37.099 } 00:08:37.099 ] 00:08:37.099 }' 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.099 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.358 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.358 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.358 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.358 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:37.358 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.617 [2024-11-27 21:41:00.501950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.617 "name": "Existed_Raid", 00:08:37.617 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:37.617 "strip_size_kb": 0, 00:08:37.617 "state": "configuring", 00:08:37.617 "raid_level": "raid1", 00:08:37.617 "superblock": true, 00:08:37.617 "num_base_bdevs": 3, 00:08:37.617 "num_base_bdevs_discovered": 1, 00:08:37.617 "num_base_bdevs_operational": 3, 00:08:37.617 "base_bdevs_list": [ 00:08:37.617 { 00:08:37.617 "name": null, 00:08:37.617 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:37.617 "is_configured": false, 00:08:37.617 "data_offset": 0, 00:08:37.617 "data_size": 63488 00:08:37.617 }, 00:08:37.617 { 00:08:37.617 "name": null, 00:08:37.617 "uuid": 
"80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:37.617 "is_configured": false, 00:08:37.617 "data_offset": 0, 00:08:37.617 "data_size": 63488 00:08:37.617 }, 00:08:37.617 { 00:08:37.617 "name": "BaseBdev3", 00:08:37.617 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:37.617 "is_configured": true, 00:08:37.617 "data_offset": 2048, 00:08:37.617 "data_size": 63488 00:08:37.617 } 00:08:37.617 ] 00:08:37.617 }' 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.617 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.876 [2024-11-27 21:41:00.987414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.876 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.141 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.141 21:41:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.141 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.141 21:41:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.141 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.141 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.141 "name": "Existed_Raid", 00:08:38.141 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:38.141 "strip_size_kb": 0, 00:08:38.141 "state": "configuring", 00:08:38.141 
"raid_level": "raid1", 00:08:38.141 "superblock": true, 00:08:38.141 "num_base_bdevs": 3, 00:08:38.141 "num_base_bdevs_discovered": 2, 00:08:38.141 "num_base_bdevs_operational": 3, 00:08:38.141 "base_bdevs_list": [ 00:08:38.141 { 00:08:38.141 "name": null, 00:08:38.141 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:38.141 "is_configured": false, 00:08:38.141 "data_offset": 0, 00:08:38.141 "data_size": 63488 00:08:38.141 }, 00:08:38.141 { 00:08:38.141 "name": "BaseBdev2", 00:08:38.141 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:38.141 "is_configured": true, 00:08:38.141 "data_offset": 2048, 00:08:38.141 "data_size": 63488 00:08:38.141 }, 00:08:38.141 { 00:08:38.141 "name": "BaseBdev3", 00:08:38.141 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:38.141 "is_configured": true, 00:08:38.141 "data_offset": 2048, 00:08:38.141 "data_size": 63488 00:08:38.141 } 00:08:38.141 ] 00:08:38.141 }' 00:08:38.141 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.141 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.407 21:41:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:38.407 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.666 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880 00:08:38.666 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.666 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.666 [2024-11-27 21:41:01.541255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:38.667 [2024-11-27 21:41:01.541429] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:38.667 [2024-11-27 21:41:01.541441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:38.667 NewBaseBdev 00:08:38.667 [2024-11-27 21:41:01.541684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:08:38.667 [2024-11-27 21:41:01.541829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:38.667 [2024-11-27 21:41:01.541844] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:08:38.667 [2024-11-27 21:41:01.541937] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:38.667 
21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.667 [ 00:08:38.667 { 00:08:38.667 "name": "NewBaseBdev", 00:08:38.667 "aliases": [ 00:08:38.667 "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880" 00:08:38.667 ], 00:08:38.667 "product_name": "Malloc disk", 00:08:38.667 "block_size": 512, 00:08:38.667 "num_blocks": 65536, 00:08:38.667 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:38.667 "assigned_rate_limits": { 00:08:38.667 "rw_ios_per_sec": 0, 00:08:38.667 "rw_mbytes_per_sec": 0, 00:08:38.667 "r_mbytes_per_sec": 0, 00:08:38.667 "w_mbytes_per_sec": 0 00:08:38.667 }, 00:08:38.667 "claimed": true, 00:08:38.667 "claim_type": "exclusive_write", 00:08:38.667 
"zoned": false, 00:08:38.667 "supported_io_types": { 00:08:38.667 "read": true, 00:08:38.667 "write": true, 00:08:38.667 "unmap": true, 00:08:38.667 "flush": true, 00:08:38.667 "reset": true, 00:08:38.667 "nvme_admin": false, 00:08:38.667 "nvme_io": false, 00:08:38.667 "nvme_io_md": false, 00:08:38.667 "write_zeroes": true, 00:08:38.667 "zcopy": true, 00:08:38.667 "get_zone_info": false, 00:08:38.667 "zone_management": false, 00:08:38.667 "zone_append": false, 00:08:38.667 "compare": false, 00:08:38.667 "compare_and_write": false, 00:08:38.667 "abort": true, 00:08:38.667 "seek_hole": false, 00:08:38.667 "seek_data": false, 00:08:38.667 "copy": true, 00:08:38.667 "nvme_iov_md": false 00:08:38.667 }, 00:08:38.667 "memory_domains": [ 00:08:38.667 { 00:08:38.667 "dma_device_id": "system", 00:08:38.667 "dma_device_type": 1 00:08:38.667 }, 00:08:38.667 { 00:08:38.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.667 "dma_device_type": 2 00:08:38.667 } 00:08:38.667 ], 00:08:38.667 "driver_specific": {} 00:08:38.667 } 00:08:38.667 ] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.667 "name": "Existed_Raid", 00:08:38.667 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:38.667 "strip_size_kb": 0, 00:08:38.667 "state": "online", 00:08:38.667 "raid_level": "raid1", 00:08:38.667 "superblock": true, 00:08:38.667 "num_base_bdevs": 3, 00:08:38.667 "num_base_bdevs_discovered": 3, 00:08:38.667 "num_base_bdevs_operational": 3, 00:08:38.667 "base_bdevs_list": [ 00:08:38.667 { 00:08:38.667 "name": "NewBaseBdev", 00:08:38.667 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:38.667 "is_configured": true, 00:08:38.667 "data_offset": 2048, 00:08:38.667 "data_size": 63488 00:08:38.667 }, 00:08:38.667 { 00:08:38.667 "name": "BaseBdev2", 00:08:38.667 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:38.667 "is_configured": true, 00:08:38.667 "data_offset": 2048, 00:08:38.667 "data_size": 63488 00:08:38.667 }, 00:08:38.667 
{ 00:08:38.667 "name": "BaseBdev3", 00:08:38.667 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:38.667 "is_configured": true, 00:08:38.667 "data_offset": 2048, 00:08:38.667 "data_size": 63488 00:08:38.667 } 00:08:38.667 ] 00:08:38.667 }' 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.667 21:41:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.927 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.186 [2024-11-27 21:41:02.052710] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.186 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.186 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:39.186 "name": "Existed_Raid", 00:08:39.186 
"aliases": [ 00:08:39.186 "397e1967-5bf9-4313-98f6-57a246b184d5" 00:08:39.186 ], 00:08:39.186 "product_name": "Raid Volume", 00:08:39.186 "block_size": 512, 00:08:39.186 "num_blocks": 63488, 00:08:39.186 "uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:39.186 "assigned_rate_limits": { 00:08:39.186 "rw_ios_per_sec": 0, 00:08:39.186 "rw_mbytes_per_sec": 0, 00:08:39.186 "r_mbytes_per_sec": 0, 00:08:39.186 "w_mbytes_per_sec": 0 00:08:39.186 }, 00:08:39.186 "claimed": false, 00:08:39.186 "zoned": false, 00:08:39.186 "supported_io_types": { 00:08:39.186 "read": true, 00:08:39.186 "write": true, 00:08:39.186 "unmap": false, 00:08:39.186 "flush": false, 00:08:39.186 "reset": true, 00:08:39.186 "nvme_admin": false, 00:08:39.186 "nvme_io": false, 00:08:39.186 "nvme_io_md": false, 00:08:39.186 "write_zeroes": true, 00:08:39.186 "zcopy": false, 00:08:39.186 "get_zone_info": false, 00:08:39.186 "zone_management": false, 00:08:39.186 "zone_append": false, 00:08:39.186 "compare": false, 00:08:39.186 "compare_and_write": false, 00:08:39.186 "abort": false, 00:08:39.186 "seek_hole": false, 00:08:39.186 "seek_data": false, 00:08:39.186 "copy": false, 00:08:39.186 "nvme_iov_md": false 00:08:39.186 }, 00:08:39.186 "memory_domains": [ 00:08:39.186 { 00:08:39.186 "dma_device_id": "system", 00:08:39.186 "dma_device_type": 1 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.186 "dma_device_type": 2 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "dma_device_id": "system", 00:08:39.186 "dma_device_type": 1 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.186 "dma_device_type": 2 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "dma_device_id": "system", 00:08:39.186 "dma_device_type": 1 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.186 "dma_device_type": 2 00:08:39.186 } 00:08:39.186 ], 00:08:39.186 "driver_specific": { 00:08:39.186 "raid": { 00:08:39.186 
"uuid": "397e1967-5bf9-4313-98f6-57a246b184d5", 00:08:39.186 "strip_size_kb": 0, 00:08:39.186 "state": "online", 00:08:39.186 "raid_level": "raid1", 00:08:39.186 "superblock": true, 00:08:39.186 "num_base_bdevs": 3, 00:08:39.186 "num_base_bdevs_discovered": 3, 00:08:39.186 "num_base_bdevs_operational": 3, 00:08:39.186 "base_bdevs_list": [ 00:08:39.186 { 00:08:39.186 "name": "NewBaseBdev", 00:08:39.186 "uuid": "2654e6a6-5fe4-4b4a-8d4b-fbec8b8cf880", 00:08:39.186 "is_configured": true, 00:08:39.186 "data_offset": 2048, 00:08:39.186 "data_size": 63488 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "name": "BaseBdev2", 00:08:39.186 "uuid": "80cef721-c5b5-4539-a659-b3d3eba153f2", 00:08:39.186 "is_configured": true, 00:08:39.186 "data_offset": 2048, 00:08:39.186 "data_size": 63488 00:08:39.186 }, 00:08:39.186 { 00:08:39.186 "name": "BaseBdev3", 00:08:39.186 "uuid": "50d45bf4-8815-40d9-a48d-be717a097cc7", 00:08:39.186 "is_configured": true, 00:08:39.186 "data_offset": 2048, 00:08:39.187 "data_size": 63488 00:08:39.187 } 00:08:39.187 ] 00:08:39.187 } 00:08:39.187 } 00:08:39.187 }' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:39.187 BaseBdev2 00:08:39.187 BaseBdev3' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.187 
21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.187 [2024-11-27 21:41:02.272069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.187 [2024-11-27 21:41:02.272148] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.187 [2024-11-27 21:41:02.272242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.187 [2024-11-27 21:41:02.272513] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:39.187 [2024-11-27 21:41:02.272526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 78785 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78785 ']' 00:08:39.187 21:41:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 78785 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.187 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78785 00:08:39.445 killing process with pid 78785 00:08:39.445 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.445 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.445 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78785' 00:08:39.446 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 78785 00:08:39.446 [2024-11-27 21:41:02.316731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.446 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 78785 00:08:39.446 [2024-11-27 21:41:02.347212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.446 21:41:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:39.446 00:08:39.446 real 0m8.556s 00:08:39.446 user 0m14.684s 00:08:39.446 sys 0m1.661s 00:08:39.446 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.446 ************************************ 00:08:39.446 END TEST raid_state_function_test_sb 00:08:39.446 ************************************ 00:08:39.446 21:41:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 21:41:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:08:39.704 21:41:02 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:39.704 21:41:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.704 21:41:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 ************************************ 00:08:39.704 START TEST raid_superblock_test 00:08:39.704 ************************************ 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:39.704 21:41:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=79389 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 79389 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 79389 ']' 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.704 21:41:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.704 [2024-11-27 21:41:02.716127] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:39.705 [2024-11-27 21:41:02.716335] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79389 ] 00:08:39.963 [2024-11-27 21:41:02.867228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.963 [2024-11-27 21:41:02.891457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.963 [2024-11-27 21:41:02.933034] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.963 [2024-11-27 21:41:02.933148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:40.533 
21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 malloc1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 [2024-11-27 21:41:03.560346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:40.533 [2024-11-27 21:41:03.560465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.533 [2024-11-27 21:41:03.560502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:08:40.533 [2024-11-27 21:41:03.560552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.533 [2024-11-27 21:41:03.562652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.533 [2024-11-27 21:41:03.562721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:40.533 pt1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 malloc2 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 [2024-11-27 21:41:03.588680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:40.533 [2024-11-27 21:41:03.588786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.533 [2024-11-27 21:41:03.588833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:40.533 [2024-11-27 21:41:03.588871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.533 [2024-11-27 21:41:03.590947] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.533 [2024-11-27 21:41:03.591011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:40.533 
pt2 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 malloc3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 [2024-11-27 21:41:03.620986] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:40.533 [2024-11-27 21:41:03.621037] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:40.533 [2024-11-27 21:41:03.621055] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:40.533 [2024-11-27 21:41:03.621065] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:40.533 [2024-11-27 21:41:03.623094] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:40.533 [2024-11-27 21:41:03.623131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:40.533 pt3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.533 [2024-11-27 21:41:03.633016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:40.533 [2024-11-27 21:41:03.634817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:40.533 [2024-11-27 21:41:03.634883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:40.533 [2024-11-27 21:41:03.635024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:08:40.533 [2024-11-27 21:41:03.635035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:40.533 [2024-11-27 21:41:03.635297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:08:40.533 
[2024-11-27 21:41:03.635432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:08:40.533 [2024-11-27 21:41:03.635450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:08:40.533 [2024-11-27 21:41:03.635567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.533 21:41:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.793 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.793 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.793 "name": "raid_bdev1", 00:08:40.793 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:40.793 "strip_size_kb": 0, 00:08:40.793 "state": "online", 00:08:40.793 "raid_level": "raid1", 00:08:40.793 "superblock": true, 00:08:40.793 "num_base_bdevs": 3, 00:08:40.793 "num_base_bdevs_discovered": 3, 00:08:40.793 "num_base_bdevs_operational": 3, 00:08:40.793 "base_bdevs_list": [ 00:08:40.793 { 00:08:40.793 "name": "pt1", 00:08:40.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:40.793 "is_configured": true, 00:08:40.793 "data_offset": 2048, 00:08:40.793 "data_size": 63488 00:08:40.793 }, 00:08:40.793 { 00:08:40.793 "name": "pt2", 00:08:40.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.793 "is_configured": true, 00:08:40.793 "data_offset": 2048, 00:08:40.793 "data_size": 63488 00:08:40.793 }, 00:08:40.793 { 00:08:40.793 "name": "pt3", 00:08:40.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:40.793 "is_configured": true, 00:08:40.793 "data_offset": 2048, 00:08:40.793 "data_size": 63488 00:08:40.793 } 00:08:40.793 ] 00:08:40.793 }' 00:08:40.793 21:41:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.793 21:41:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:41.053 21:41:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.053 [2024-11-27 21:41:04.076665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:41.053 "name": "raid_bdev1", 00:08:41.053 "aliases": [ 00:08:41.053 "875e529e-199f-4dd6-9f27-c69a522e4a7b" 00:08:41.053 ], 00:08:41.053 "product_name": "Raid Volume", 00:08:41.053 "block_size": 512, 00:08:41.053 "num_blocks": 63488, 00:08:41.053 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:41.053 "assigned_rate_limits": { 00:08:41.053 "rw_ios_per_sec": 0, 00:08:41.053 "rw_mbytes_per_sec": 0, 00:08:41.053 "r_mbytes_per_sec": 0, 00:08:41.053 "w_mbytes_per_sec": 0 00:08:41.053 }, 00:08:41.053 "claimed": false, 00:08:41.053 "zoned": false, 00:08:41.053 "supported_io_types": { 00:08:41.053 "read": true, 00:08:41.053 "write": true, 00:08:41.053 "unmap": false, 00:08:41.053 "flush": false, 00:08:41.053 "reset": true, 00:08:41.053 "nvme_admin": false, 00:08:41.053 "nvme_io": false, 00:08:41.053 "nvme_io_md": false, 00:08:41.053 "write_zeroes": true, 00:08:41.053 "zcopy": false, 00:08:41.053 "get_zone_info": false, 00:08:41.053 "zone_management": false, 00:08:41.053 "zone_append": false, 00:08:41.053 "compare": false, 00:08:41.053 
"compare_and_write": false, 00:08:41.053 "abort": false, 00:08:41.053 "seek_hole": false, 00:08:41.053 "seek_data": false, 00:08:41.053 "copy": false, 00:08:41.053 "nvme_iov_md": false 00:08:41.053 }, 00:08:41.053 "memory_domains": [ 00:08:41.053 { 00:08:41.053 "dma_device_id": "system", 00:08:41.053 "dma_device_type": 1 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.053 "dma_device_type": 2 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "dma_device_id": "system", 00:08:41.053 "dma_device_type": 1 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.053 "dma_device_type": 2 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "dma_device_id": "system", 00:08:41.053 "dma_device_type": 1 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.053 "dma_device_type": 2 00:08:41.053 } 00:08:41.053 ], 00:08:41.053 "driver_specific": { 00:08:41.053 "raid": { 00:08:41.053 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:41.053 "strip_size_kb": 0, 00:08:41.053 "state": "online", 00:08:41.053 "raid_level": "raid1", 00:08:41.053 "superblock": true, 00:08:41.053 "num_base_bdevs": 3, 00:08:41.053 "num_base_bdevs_discovered": 3, 00:08:41.053 "num_base_bdevs_operational": 3, 00:08:41.053 "base_bdevs_list": [ 00:08:41.053 { 00:08:41.053 "name": "pt1", 00:08:41.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.053 "is_configured": true, 00:08:41.053 "data_offset": 2048, 00:08:41.053 "data_size": 63488 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "name": "pt2", 00:08:41.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.053 "is_configured": true, 00:08:41.053 "data_offset": 2048, 00:08:41.053 "data_size": 63488 00:08:41.053 }, 00:08:41.053 { 00:08:41.053 "name": "pt3", 00:08:41.053 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.053 "is_configured": true, 00:08:41.053 "data_offset": 2048, 00:08:41.053 "data_size": 63488 00:08:41.053 } 
00:08:41.053 ] 00:08:41.053 } 00:08:41.053 } 00:08:41.053 }' 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:41.053 pt2 00:08:41.053 pt3' 00:08:41.053 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.313 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 [2024-11-27 21:41:04.324068] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=875e529e-199f-4dd6-9f27-c69a522e4a7b 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 875e529e-199f-4dd6-9f27-c69a522e4a7b ']' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 [2024-11-27 21:41:04.371731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.314 [2024-11-27 21:41:04.371814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.314 [2024-11-27 21:41:04.371905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.314 [2024-11-27 21:41:04.371997] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.314 [2024-11-27 21:41:04.372011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.314 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 [2024-11-27 21:41:04.507510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:41.574 [2024-11-27 21:41:04.509441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:41.574 [2024-11-27 21:41:04.509487] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:41.574 [2024-11-27 21:41:04.509546] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:41.574 [2024-11-27 21:41:04.509612] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:41.574 [2024-11-27 21:41:04.509632] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:41.574 [2024-11-27 21:41:04.509644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.574 [2024-11-27 21:41:04.509654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:08:41.574 request: 00:08:41.574 { 00:08:41.574 "name": "raid_bdev1", 00:08:41.574 "raid_level": "raid1", 00:08:41.574 "base_bdevs": [ 00:08:41.574 "malloc1", 00:08:41.574 "malloc2", 00:08:41.574 "malloc3" 00:08:41.574 ], 00:08:41.574 "superblock": false, 00:08:41.574 "method": "bdev_raid_create", 00:08:41.574 "req_id": 1 00:08:41.574 } 00:08:41.574 Got JSON-RPC error response 00:08:41.574 response: 00:08:41.574 { 00:08:41.574 "code": -17, 00:08:41.574 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:41.574 } 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 [2024-11-27 21:41:04.575373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:41.574 [2024-11-27 21:41:04.575466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.574 [2024-11-27 21:41:04.575503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:41.574 [2024-11-27 21:41:04.575532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.574 [2024-11-27 21:41:04.577715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.574 [2024-11-27 21:41:04.577786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:41.574 [2024-11-27 21:41:04.577903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:41.574 [2024-11-27 21:41:04.577968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:41.574 pt1 00:08:41.574 
21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.574 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.574 "name": "raid_bdev1", 00:08:41.574 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:41.574 "strip_size_kb": 0, 00:08:41.574 
"state": "configuring", 00:08:41.574 "raid_level": "raid1", 00:08:41.574 "superblock": true, 00:08:41.574 "num_base_bdevs": 3, 00:08:41.574 "num_base_bdevs_discovered": 1, 00:08:41.574 "num_base_bdevs_operational": 3, 00:08:41.575 "base_bdevs_list": [ 00:08:41.575 { 00:08:41.575 "name": "pt1", 00:08:41.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:41.575 "is_configured": true, 00:08:41.575 "data_offset": 2048, 00:08:41.575 "data_size": 63488 00:08:41.575 }, 00:08:41.575 { 00:08:41.575 "name": null, 00:08:41.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.575 "is_configured": false, 00:08:41.575 "data_offset": 2048, 00:08:41.575 "data_size": 63488 00:08:41.575 }, 00:08:41.575 { 00:08:41.575 "name": null, 00:08:41.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:41.575 "is_configured": false, 00:08:41.575 "data_offset": 2048, 00:08:41.575 "data_size": 63488 00:08:41.575 } 00:08:41.575 ] 00:08:41.575 }' 00:08:41.575 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.575 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [2024-11-27 21:41:04.974687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.144 [2024-11-27 21:41:04.974787] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.144 [2024-11-27 21:41:04.974819] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:42.144 
[2024-11-27 21:41:04.974831] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.144 [2024-11-27 21:41:04.975199] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.144 [2024-11-27 21:41:04.975219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.144 [2024-11-27 21:41:04.975279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.144 [2024-11-27 21:41:04.975308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.144 pt2 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 [2024-11-27 21:41:04.986681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.144 21:41:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.144 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.144 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.144 "name": "raid_bdev1", 00:08:42.144 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:42.144 "strip_size_kb": 0, 00:08:42.144 "state": "configuring", 00:08:42.144 "raid_level": "raid1", 00:08:42.144 "superblock": true, 00:08:42.144 "num_base_bdevs": 3, 00:08:42.144 "num_base_bdevs_discovered": 1, 00:08:42.144 "num_base_bdevs_operational": 3, 00:08:42.144 "base_bdevs_list": [ 00:08:42.144 { 00:08:42.144 "name": "pt1", 00:08:42.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.144 "is_configured": true, 00:08:42.145 "data_offset": 2048, 00:08:42.145 "data_size": 63488 00:08:42.145 }, 00:08:42.145 { 00:08:42.145 "name": null, 00:08:42.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.145 "is_configured": false, 00:08:42.145 "data_offset": 0, 00:08:42.145 "data_size": 63488 00:08:42.145 }, 00:08:42.145 { 00:08:42.145 "name": null, 00:08:42.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.145 "is_configured": false, 00:08:42.145 
"data_offset": 2048, 00:08:42.145 "data_size": 63488 00:08:42.145 } 00:08:42.145 ] 00:08:42.145 }' 00:08:42.145 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.145 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 [2024-11-27 21:41:05.405937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:42.405 [2024-11-27 21:41:05.406031] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.405 [2024-11-27 21:41:05.406069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:42.405 [2024-11-27 21:41:05.406095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.405 [2024-11-27 21:41:05.406541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.405 [2024-11-27 21:41:05.406600] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:42.405 [2024-11-27 21:41:05.406715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:42.405 [2024-11-27 21:41:05.406765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.405 pt2 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.405 21:41:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 [2024-11-27 21:41:05.417912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:42.405 [2024-11-27 21:41:05.417998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.405 [2024-11-27 21:41:05.418030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:42.405 [2024-11-27 21:41:05.418055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.405 [2024-11-27 21:41:05.418393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.405 [2024-11-27 21:41:05.418448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:42.405 [2024-11-27 21:41:05.418539] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:42.405 [2024-11-27 21:41:05.418582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:42.405 [2024-11-27 21:41:05.418716] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:42.405 [2024-11-27 21:41:05.418755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.405 [2024-11-27 21:41:05.419016] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:42.405 [2024-11-27 21:41:05.419168] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000001900 00:08:42.405 [2024-11-27 21:41:05.419209] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:08:42.405 [2024-11-27 21:41:05.419370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.405 pt3 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.405 21:41:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.405 "name": "raid_bdev1", 00:08:42.405 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:42.405 "strip_size_kb": 0, 00:08:42.405 "state": "online", 00:08:42.405 "raid_level": "raid1", 00:08:42.405 "superblock": true, 00:08:42.405 "num_base_bdevs": 3, 00:08:42.405 "num_base_bdevs_discovered": 3, 00:08:42.405 "num_base_bdevs_operational": 3, 00:08:42.405 "base_bdevs_list": [ 00:08:42.405 { 00:08:42.405 "name": "pt1", 00:08:42.405 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.405 "is_configured": true, 00:08:42.405 "data_offset": 2048, 00:08:42.405 "data_size": 63488 00:08:42.405 }, 00:08:42.405 { 00:08:42.405 "name": "pt2", 00:08:42.405 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.405 "is_configured": true, 00:08:42.405 "data_offset": 2048, 00:08:42.405 "data_size": 63488 00:08:42.405 }, 00:08:42.405 { 00:08:42.405 "name": "pt3", 00:08:42.405 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:42.405 "is_configured": true, 00:08:42.405 "data_offset": 2048, 00:08:42.405 "data_size": 63488 00:08:42.405 } 00:08:42.405 ] 00:08:42.405 }' 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.405 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.975 [2024-11-27 21:41:05.897386] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.975 "name": "raid_bdev1", 00:08:42.975 "aliases": [ 00:08:42.975 "875e529e-199f-4dd6-9f27-c69a522e4a7b" 00:08:42.975 ], 00:08:42.975 "product_name": "Raid Volume", 00:08:42.975 "block_size": 512, 00:08:42.975 "num_blocks": 63488, 00:08:42.975 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:42.975 "assigned_rate_limits": { 00:08:42.975 "rw_ios_per_sec": 0, 00:08:42.975 "rw_mbytes_per_sec": 0, 00:08:42.975 "r_mbytes_per_sec": 0, 00:08:42.975 "w_mbytes_per_sec": 0 00:08:42.975 }, 00:08:42.975 "claimed": false, 00:08:42.975 "zoned": false, 00:08:42.975 "supported_io_types": { 00:08:42.975 "read": true, 00:08:42.975 "write": true, 00:08:42.975 "unmap": false, 00:08:42.975 "flush": false, 00:08:42.975 "reset": true, 00:08:42.975 "nvme_admin": false, 00:08:42.975 "nvme_io": false, 00:08:42.975 "nvme_io_md": false, 00:08:42.975 "write_zeroes": true, 00:08:42.975 "zcopy": false, 00:08:42.975 "get_zone_info": 
false, 00:08:42.975 "zone_management": false, 00:08:42.975 "zone_append": false, 00:08:42.975 "compare": false, 00:08:42.975 "compare_and_write": false, 00:08:42.975 "abort": false, 00:08:42.975 "seek_hole": false, 00:08:42.975 "seek_data": false, 00:08:42.975 "copy": false, 00:08:42.975 "nvme_iov_md": false 00:08:42.975 }, 00:08:42.975 "memory_domains": [ 00:08:42.975 { 00:08:42.975 "dma_device_id": "system", 00:08:42.975 "dma_device_type": 1 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.975 "dma_device_type": 2 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "dma_device_id": "system", 00:08:42.975 "dma_device_type": 1 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.975 "dma_device_type": 2 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "dma_device_id": "system", 00:08:42.975 "dma_device_type": 1 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.975 "dma_device_type": 2 00:08:42.975 } 00:08:42.975 ], 00:08:42.975 "driver_specific": { 00:08:42.975 "raid": { 00:08:42.975 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:42.975 "strip_size_kb": 0, 00:08:42.975 "state": "online", 00:08:42.975 "raid_level": "raid1", 00:08:42.975 "superblock": true, 00:08:42.975 "num_base_bdevs": 3, 00:08:42.975 "num_base_bdevs_discovered": 3, 00:08:42.975 "num_base_bdevs_operational": 3, 00:08:42.975 "base_bdevs_list": [ 00:08:42.975 { 00:08:42.975 "name": "pt1", 00:08:42.975 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:42.975 "is_configured": true, 00:08:42.975 "data_offset": 2048, 00:08:42.975 "data_size": 63488 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "name": "pt2", 00:08:42.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.975 "is_configured": true, 00:08:42.975 "data_offset": 2048, 00:08:42.975 "data_size": 63488 00:08:42.975 }, 00:08:42.975 { 00:08:42.975 "name": "pt3", 00:08:42.975 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:08:42.975 "is_configured": true, 00:08:42.975 "data_offset": 2048, 00:08:42.975 "data_size": 63488 00:08:42.975 } 00:08:42.975 ] 00:08:42.975 } 00:08:42.975 } 00:08:42.975 }' 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:42.975 pt2 00:08:42.975 pt3' 00:08:42.975 21:41:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.975 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.975 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.976 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.235 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.235 [2024-11-27 21:41:06.164875] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 875e529e-199f-4dd6-9f27-c69a522e4a7b '!=' 875e529e-199f-4dd6-9f27-c69a522e4a7b ']' 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 [2024-11-27 21:41:06.212623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.236 21:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.236 "name": "raid_bdev1", 00:08:43.236 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:43.236 "strip_size_kb": 0, 00:08:43.236 "state": "online", 00:08:43.236 "raid_level": "raid1", 00:08:43.236 "superblock": true, 00:08:43.236 "num_base_bdevs": 3, 00:08:43.236 "num_base_bdevs_discovered": 2, 00:08:43.236 "num_base_bdevs_operational": 2, 00:08:43.236 "base_bdevs_list": [ 00:08:43.236 { 00:08:43.236 "name": null, 00:08:43.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.236 "is_configured": false, 00:08:43.236 "data_offset": 0, 00:08:43.236 "data_size": 63488 00:08:43.236 }, 00:08:43.236 { 00:08:43.236 "name": "pt2", 00:08:43.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.236 "is_configured": true, 00:08:43.236 "data_offset": 2048, 00:08:43.236 "data_size": 63488 00:08:43.236 }, 00:08:43.236 { 00:08:43.236 "name": "pt3", 00:08:43.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.236 "is_configured": true, 00:08:43.236 "data_offset": 2048, 00:08:43.236 "data_size": 63488 00:08:43.236 } 
00:08:43.236 ] 00:08:43.236 }' 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.236 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 [2024-11-27 21:41:06.643820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:43.806 [2024-11-27 21:41:06.643888] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.806 [2024-11-27 21:41:06.643973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.806 [2024-11-27 21:41:06.644073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.806 [2024-11-27 21:41:06.644128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 [2024-11-27 21:41:06.719659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:43.806 [2024-11-27 21:41:06.719705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:43.806 [2024-11-27 21:41:06.719723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:43.806 [2024-11-27 21:41:06.719731] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:43.806 [2024-11-27 21:41:06.721871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:43.806 [2024-11-27 21:41:06.721902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:43.806 [2024-11-27 21:41:06.721966] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:43.806 [2024-11-27 21:41:06.721997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:43.806 pt2 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.806 21:41:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.806 "name": "raid_bdev1", 00:08:43.806 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:43.806 "strip_size_kb": 0, 00:08:43.806 "state": "configuring", 00:08:43.806 "raid_level": "raid1", 00:08:43.806 "superblock": true, 00:08:43.806 "num_base_bdevs": 3, 00:08:43.806 "num_base_bdevs_discovered": 1, 00:08:43.806 "num_base_bdevs_operational": 2, 00:08:43.806 "base_bdevs_list": [ 00:08:43.806 { 00:08:43.806 "name": null, 00:08:43.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.806 "is_configured": false, 00:08:43.806 "data_offset": 2048, 00:08:43.806 "data_size": 63488 00:08:43.806 }, 00:08:43.806 { 00:08:43.806 "name": "pt2", 00:08:43.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:43.806 "is_configured": true, 00:08:43.806 "data_offset": 2048, 00:08:43.806 "data_size": 63488 00:08:43.806 }, 00:08:43.806 { 00:08:43.806 "name": null, 00:08:43.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:43.806 "is_configured": false, 00:08:43.806 "data_offset": 2048, 00:08:43.806 "data_size": 63488 00:08:43.806 } 
00:08:43.806 ] 00:08:43.806 }' 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.806 21:41:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.066 [2024-11-27 21:41:07.154944] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:44.066 [2024-11-27 21:41:07.155005] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.066 [2024-11-27 21:41:07.155028] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:44.066 [2024-11-27 21:41:07.155037] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.066 [2024-11-27 21:41:07.155404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.066 [2024-11-27 21:41:07.155420] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:44.066 [2024-11-27 21:41:07.155487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:44.066 [2024-11-27 21:41:07.155507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:44.066 [2024-11-27 21:41:07.155592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 
00:08:44.066 [2024-11-27 21:41:07.155599] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:44.066 [2024-11-27 21:41:07.155842] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:08:44.066 [2024-11-27 21:41:07.155979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:44.066 [2024-11-27 21:41:07.155997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:44.066 [2024-11-27 21:41:07.156145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.066 pt3 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.066 
21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.066 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.326 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.326 "name": "raid_bdev1", 00:08:44.326 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:44.326 "strip_size_kb": 0, 00:08:44.326 "state": "online", 00:08:44.326 "raid_level": "raid1", 00:08:44.326 "superblock": true, 00:08:44.326 "num_base_bdevs": 3, 00:08:44.326 "num_base_bdevs_discovered": 2, 00:08:44.326 "num_base_bdevs_operational": 2, 00:08:44.326 "base_bdevs_list": [ 00:08:44.326 { 00:08:44.326 "name": null, 00:08:44.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.326 "is_configured": false, 00:08:44.326 "data_offset": 2048, 00:08:44.326 "data_size": 63488 00:08:44.326 }, 00:08:44.326 { 00:08:44.326 "name": "pt2", 00:08:44.326 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.326 "is_configured": true, 00:08:44.326 "data_offset": 2048, 00:08:44.326 "data_size": 63488 00:08:44.326 }, 00:08:44.326 { 00:08:44.326 "name": "pt3", 00:08:44.326 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.326 "is_configured": true, 00:08:44.326 "data_offset": 2048, 00:08:44.326 "data_size": 63488 00:08:44.326 } 00:08:44.326 ] 00:08:44.326 }' 00:08:44.326 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.326 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.586 [2024-11-27 21:41:07.598215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.586 [2024-11-27 21:41:07.598298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.586 [2024-11-27 21:41:07.598412] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.586 [2024-11-27 21:41:07.598505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.586 [2024-11-27 21:41:07.598562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.586 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.586 [2024-11-27 21:41:07.670072] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:44.586 [2024-11-27 21:41:07.670165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:44.586 [2024-11-27 21:41:07.670207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:44.586 [2024-11-27 21:41:07.670237] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:44.586 [2024-11-27 21:41:07.672340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:44.586 [2024-11-27 21:41:07.672410] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:44.586 [2024-11-27 21:41:07.672503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:44.586 [2024-11-27 21:41:07.672585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:44.586 [2024-11-27 21:41:07.672734] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:44.586 [2024-11-27 21:41:07.672791] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.586 [2024-11-27 21:41:07.672869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:08:44.587 [2024-11-27 21:41:07.672961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:44.587 pt1 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.587 21:41:07 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.846 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.846 "name": "raid_bdev1", 00:08:44.846 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:44.846 "strip_size_kb": 0, 00:08:44.846 "state": "configuring", 00:08:44.846 "raid_level": "raid1", 00:08:44.846 "superblock": true, 00:08:44.846 "num_base_bdevs": 3, 00:08:44.846 "num_base_bdevs_discovered": 1, 00:08:44.846 "num_base_bdevs_operational": 2, 00:08:44.846 "base_bdevs_list": [ 00:08:44.846 { 00:08:44.846 "name": null, 00:08:44.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.846 "is_configured": false, 00:08:44.846 "data_offset": 2048, 00:08:44.846 "data_size": 63488 00:08:44.846 }, 00:08:44.846 { 00:08:44.846 "name": "pt2", 00:08:44.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:44.847 "is_configured": true, 00:08:44.847 "data_offset": 2048, 00:08:44.847 "data_size": 63488 00:08:44.847 }, 00:08:44.847 { 00:08:44.847 "name": null, 00:08:44.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:44.847 "is_configured": false, 00:08:44.847 "data_offset": 2048, 00:08:44.847 "data_size": 63488 00:08:44.847 } 00:08:44.847 ] 00:08:44.847 }' 00:08:44.847 21:41:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.847 21:41:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.126 [2024-11-27 21:41:08.193162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:45.126 [2024-11-27 21:41:08.193224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.126 [2024-11-27 21:41:08.193244] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:08:45.126 [2024-11-27 21:41:08.193254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.126 [2024-11-27 21:41:08.193643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.126 [2024-11-27 21:41:08.193665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:45.126 [2024-11-27 21:41:08.193734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:45.126 [2024-11-27 21:41:08.193762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:45.126 [2024-11-27 21:41:08.193868] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:08:45.126 [2024-11-27 21:41:08.193880] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:45.126 [2024-11-27 21:41:08.194122] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:08:45.126 [2024-11-27 21:41:08.194243] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:08:45.126 [2024-11-27 21:41:08.194307] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:08:45.126 [2024-11-27 21:41:08.194421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:45.126 pt3 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.126 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.127 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.127 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:45.127 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.127 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.127 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:08:45.402 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.402 "name": "raid_bdev1", 00:08:45.402 "uuid": "875e529e-199f-4dd6-9f27-c69a522e4a7b", 00:08:45.402 "strip_size_kb": 0, 00:08:45.402 "state": "online", 00:08:45.402 "raid_level": "raid1", 00:08:45.402 "superblock": true, 00:08:45.402 "num_base_bdevs": 3, 00:08:45.402 "num_base_bdevs_discovered": 2, 00:08:45.402 "num_base_bdevs_operational": 2, 00:08:45.402 "base_bdevs_list": [ 00:08:45.402 { 00:08:45.402 "name": null, 00:08:45.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.402 "is_configured": false, 00:08:45.402 "data_offset": 2048, 00:08:45.402 "data_size": 63488 00:08:45.402 }, 00:08:45.402 { 00:08:45.402 "name": "pt2", 00:08:45.402 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:45.402 "is_configured": true, 00:08:45.402 "data_offset": 2048, 00:08:45.402 "data_size": 63488 00:08:45.402 }, 00:08:45.402 { 00:08:45.402 "name": "pt3", 00:08:45.402 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:45.402 "is_configured": true, 00:08:45.402 "data_offset": 2048, 00:08:45.402 "data_size": 63488 00:08:45.402 } 00:08:45.402 ] 00:08:45.402 }' 00:08:45.402 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.402 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 [2024-11-27 21:41:08.676627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 875e529e-199f-4dd6-9f27-c69a522e4a7b '!=' 875e529e-199f-4dd6-9f27-c69a522e4a7b ']' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 79389 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 79389 ']' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 79389 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79389 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79389' 00:08:45.662 killing process with pid 79389 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 79389 00:08:45.662 [2024-11-27 21:41:08.729689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.662 [2024-11-27 21:41:08.729825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.662 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 79389 00:08:45.662 [2024-11-27 21:41:08.729921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.662 [2024-11-27 21:41:08.729935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:08:45.662 [2024-11-27 21:41:08.763208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.922 21:41:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:45.922 ************************************ 00:08:45.922 END TEST raid_superblock_test 00:08:45.922 ************************************ 00:08:45.922 00:08:45.922 real 0m6.344s 00:08:45.922 user 0m10.711s 00:08:45.922 sys 0m1.224s 00:08:45.922 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.922 21:41:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.922 21:41:09 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:08:45.922 21:41:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:45.922 21:41:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.922 21:41:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.181 ************************************ 00:08:46.181 START TEST raid_read_error_test 00:08:46.181 ************************************ 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:08:46.181 21:41:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:46.181 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:46.181 21:41:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BO4jmOcnLn 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79818 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79818 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 79818 ']' 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.182 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.182 [2024-11-27 21:41:09.141658] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:46.182 [2024-11-27 21:41:09.141809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79818 ] 00:08:46.182 [2024-11-27 21:41:09.294482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.440 [2024-11-27 21:41:09.319026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.440 [2024-11-27 21:41:09.360632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.440 [2024-11-27 21:41:09.360666] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 BaseBdev1_malloc 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 true 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 [2024-11-27 21:41:09.991683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:47.010 [2024-11-27 21:41:09.991734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.010 [2024-11-27 21:41:09.991775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:47.010 [2024-11-27 21:41:09.991783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.010 [2024-11-27 21:41:09.994041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.010 [2024-11-27 21:41:09.994075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:47.010 BaseBdev1 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 BaseBdev2_malloc 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 true 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 [2024-11-27 21:41:10.032074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:47.010 [2024-11-27 21:41:10.032138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.010 [2024-11-27 21:41:10.032156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:47.010 [2024-11-27 21:41:10.032172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.010 [2024-11-27 21:41:10.034215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.010 [2024-11-27 21:41:10.034250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:47.010 BaseBdev2 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 BaseBdev3_malloc 00:08:47.010 21:41:10 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 true 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 [2024-11-27 21:41:10.072483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:47.010 [2024-11-27 21:41:10.072526] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.010 [2024-11-27 21:41:10.072544] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:47.010 [2024-11-27 21:41:10.072552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.010 [2024-11-27 21:41:10.074594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.010 [2024-11-27 21:41:10.074627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:47.010 BaseBdev3 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.010 [2024-11-27 21:41:10.084511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.010 [2024-11-27 21:41:10.086272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.010 [2024-11-27 21:41:10.086404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.010 [2024-11-27 21:41:10.086580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:47.010 [2024-11-27 21:41:10.086594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.010 [2024-11-27 21:41:10.086849] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:47.010 [2024-11-27 21:41:10.086995] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:47.010 [2024-11-27 21:41:10.087005] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:47.010 [2024-11-27 21:41:10.087118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.010 21:41:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.010 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.011 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.011 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.011 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.011 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.011 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.270 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.270 "name": "raid_bdev1", 00:08:47.270 "uuid": "29a452e6-2b00-4664-a6cb-dcf78a81b121", 00:08:47.270 "strip_size_kb": 0, 00:08:47.270 "state": "online", 00:08:47.270 "raid_level": "raid1", 00:08:47.270 "superblock": true, 00:08:47.270 "num_base_bdevs": 3, 00:08:47.270 "num_base_bdevs_discovered": 3, 00:08:47.270 "num_base_bdevs_operational": 3, 00:08:47.270 "base_bdevs_list": [ 00:08:47.270 { 00:08:47.270 "name": "BaseBdev1", 00:08:47.270 "uuid": "a0156d81-f8ae-5ed6-9390-54846b4ba349", 00:08:47.270 "is_configured": true, 00:08:47.270 "data_offset": 2048, 00:08:47.270 "data_size": 63488 00:08:47.270 }, 00:08:47.270 { 00:08:47.270 "name": "BaseBdev2", 00:08:47.270 "uuid": "d751e3f7-7675-568a-9a50-7690930ff426", 00:08:47.270 "is_configured": true, 00:08:47.270 "data_offset": 2048, 00:08:47.270 "data_size": 63488 
00:08:47.270 }, 00:08:47.270 { 00:08:47.270 "name": "BaseBdev3", 00:08:47.270 "uuid": "22c125cb-d3b0-5503-8aaf-0190e1dd031f", 00:08:47.270 "is_configured": true, 00:08:47.270 "data_offset": 2048, 00:08:47.270 "data_size": 63488 00:08:47.270 } 00:08:47.270 ] 00:08:47.270 }' 00:08:47.270 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.270 21:41:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.530 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:47.530 21:41:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:47.530 [2024-11-27 21:41:10.624011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:48.470 
21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.470 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.731 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.731 "name": "raid_bdev1", 00:08:48.731 "uuid": "29a452e6-2b00-4664-a6cb-dcf78a81b121", 00:08:48.731 "strip_size_kb": 0, 00:08:48.731 "state": "online", 00:08:48.731 "raid_level": "raid1", 00:08:48.731 "superblock": true, 00:08:48.731 "num_base_bdevs": 3, 00:08:48.731 "num_base_bdevs_discovered": 3, 00:08:48.731 "num_base_bdevs_operational": 3, 00:08:48.731 "base_bdevs_list": [ 00:08:48.731 { 00:08:48.731 "name": "BaseBdev1", 00:08:48.731 "uuid": "a0156d81-f8ae-5ed6-9390-54846b4ba349", 
00:08:48.731 "is_configured": true, 00:08:48.731 "data_offset": 2048, 00:08:48.731 "data_size": 63488 00:08:48.731 }, 00:08:48.731 { 00:08:48.731 "name": "BaseBdev2", 00:08:48.731 "uuid": "d751e3f7-7675-568a-9a50-7690930ff426", 00:08:48.731 "is_configured": true, 00:08:48.731 "data_offset": 2048, 00:08:48.731 "data_size": 63488 00:08:48.731 }, 00:08:48.731 { 00:08:48.731 "name": "BaseBdev3", 00:08:48.731 "uuid": "22c125cb-d3b0-5503-8aaf-0190e1dd031f", 00:08:48.731 "is_configured": true, 00:08:48.731 "data_offset": 2048, 00:08:48.731 "data_size": 63488 00:08:48.731 } 00:08:48.731 ] 00:08:48.731 }' 00:08:48.731 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.731 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.991 [2024-11-27 21:41:11.938946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.991 [2024-11-27 21:41:11.938979] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.991 [2024-11-27 21:41:11.941557] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.991 [2024-11-27 21:41:11.941647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.991 [2024-11-27 21:41:11.941776] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.991 [2024-11-27 21:41:11.941835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:48.991 { 00:08:48.991 "results": [ 00:08:48.991 { 00:08:48.991 "job": "raid_bdev1", 
00:08:48.991 "core_mask": "0x1", 00:08:48.991 "workload": "randrw", 00:08:48.991 "percentage": 50, 00:08:48.991 "status": "finished", 00:08:48.991 "queue_depth": 1, 00:08:48.991 "io_size": 131072, 00:08:48.991 "runtime": 1.315709, 00:08:48.991 "iops": 14564.010734896547, 00:08:48.991 "mibps": 1820.5013418620683, 00:08:48.991 "io_failed": 0, 00:08:48.991 "io_timeout": 0, 00:08:48.991 "avg_latency_us": 66.09474555946562, 00:08:48.991 "min_latency_us": 22.69344978165939, 00:08:48.991 "max_latency_us": 1373.6803493449781 00:08:48.991 } 00:08:48.991 ], 00:08:48.991 "core_count": 1 00:08:48.991 } 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79818 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 79818 ']' 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 79818 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79818 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79818' 00:08:48.991 killing process with pid 79818 00:08:48.991 21:41:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 79818 00:08:48.991 [2024-11-27 21:41:11.977467] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:48.991 21:41:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 79818 00:08:48.991 [2024-11-27 21:41:12.002995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.251 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BO4jmOcnLn 00:08:49.251 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:49.251 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:49.251 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:49.251 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:49.251 ************************************ 00:08:49.251 END TEST raid_read_error_test 00:08:49.252 ************************************ 00:08:49.252 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.252 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:49.252 21:41:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:49.252 00:08:49.252 real 0m3.169s 00:08:49.252 user 0m4.018s 00:08:49.252 sys 0m0.456s 00:08:49.252 21:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.252 21:41:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.252 21:41:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:08:49.252 21:41:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:49.252 21:41:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.252 21:41:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.252 ************************************ 00:08:49.252 START TEST raid_write_error_test 00:08:49.252 ************************************ 00:08:49.252 21:41:12 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CLcspT8obf 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=79947 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 79947 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 79947 ']' 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.252 21:41:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.512 [2024-11-27 21:41:12.387593] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:08:49.512 [2024-11-27 21:41:12.387784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79947 ] 00:08:49.512 [2024-11-27 21:41:12.540391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.512 [2024-11-27 21:41:12.565459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.512 [2024-11-27 21:41:12.607326] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:49.512 [2024-11-27 21:41:12.607434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.452 BaseBdev1_malloc 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.452 true 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.452 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.452 [2024-11-27 21:41:13.242516] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:50.452 [2024-11-27 21:41:13.242569] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.452 [2024-11-27 21:41:13.242596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:08:50.452 [2024-11-27 21:41:13.242606] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.453 [2024-11-27 21:41:13.244824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.453 [2024-11-27 21:41:13.244858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:50.453 BaseBdev1 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.453 BaseBdev2_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 true 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 [2024-11-27 21:41:13.282905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:50.453 [2024-11-27 21:41:13.282949] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.453 [2024-11-27 21:41:13.282981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:08:50.453 [2024-11-27 21:41:13.282996] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.453 [2024-11-27 21:41:13.285051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.453 [2024-11-27 21:41:13.285086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:50.453 BaseBdev2 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:50.453 21:41:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 BaseBdev3_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 true 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 [2024-11-27 21:41:13.323316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:50.453 [2024-11-27 21:41:13.323356] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:50.453 [2024-11-27 21:41:13.323391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:08:50.453 [2024-11-27 21:41:13.323398] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:50.453 [2024-11-27 21:41:13.325459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:50.453 [2024-11-27 21:41:13.325494] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:50.453 BaseBdev3 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 [2024-11-27 21:41:13.335349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:50.453 [2024-11-27 21:41:13.337209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.453 [2024-11-27 21:41:13.337287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.453 [2024-11-27 21:41:13.337451] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:08:50.453 [2024-11-27 21:41:13.337464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:50.453 [2024-11-27 21:41:13.337703] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002bb0 00:08:50.453 [2024-11-27 21:41:13.337874] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:08:50.453 [2024-11-27 21:41:13.337886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:08:50.453 [2024-11-27 21:41:13.338024] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.453 "name": "raid_bdev1", 00:08:50.453 "uuid": "709a6a3b-04bc-4db9-9f88-f399f31d9881", 00:08:50.453 "strip_size_kb": 0, 00:08:50.453 "state": "online", 00:08:50.453 "raid_level": "raid1", 00:08:50.453 "superblock": true, 00:08:50.453 "num_base_bdevs": 3, 00:08:50.453 "num_base_bdevs_discovered": 3, 00:08:50.453 "num_base_bdevs_operational": 3, 00:08:50.453 "base_bdevs_list": [ 00:08:50.453 { 00:08:50.453 "name": "BaseBdev1", 00:08:50.453 
"uuid": "621e5f20-0e6d-5c37-be0b-ff2165a638e0", 00:08:50.453 "is_configured": true, 00:08:50.453 "data_offset": 2048, 00:08:50.453 "data_size": 63488 00:08:50.453 }, 00:08:50.453 { 00:08:50.453 "name": "BaseBdev2", 00:08:50.453 "uuid": "df511a5f-092b-5117-a2a1-fe3a53be4f47", 00:08:50.453 "is_configured": true, 00:08:50.453 "data_offset": 2048, 00:08:50.453 "data_size": 63488 00:08:50.453 }, 00:08:50.453 { 00:08:50.453 "name": "BaseBdev3", 00:08:50.453 "uuid": "c0426d56-b884-5750-8630-0b686dfb5862", 00:08:50.453 "is_configured": true, 00:08:50.453 "data_offset": 2048, 00:08:50.453 "data_size": 63488 00:08:50.453 } 00:08:50.453 ] 00:08:50.453 }' 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.453 21:41:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.714 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:50.714 21:41:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:50.974 [2024-11-27 21:41:13.854892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002d50 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 [2024-11-27 21:41:14.770393] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:51.915 [2024-11-27 21:41:14.770508] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.915 [2024-11-27 21:41:14.770768] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002d50 
00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.915 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.915 "name": "raid_bdev1", 00:08:51.915 "uuid": "709a6a3b-04bc-4db9-9f88-f399f31d9881", 00:08:51.915 "strip_size_kb": 0, 00:08:51.915 "state": "online", 00:08:51.915 "raid_level": "raid1", 00:08:51.915 "superblock": true, 00:08:51.915 "num_base_bdevs": 3, 00:08:51.915 "num_base_bdevs_discovered": 2, 00:08:51.915 "num_base_bdevs_operational": 2, 00:08:51.915 "base_bdevs_list": [ 00:08:51.915 { 00:08:51.915 "name": null, 00:08:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.915 "is_configured": false, 00:08:51.915 "data_offset": 0, 00:08:51.915 "data_size": 63488 00:08:51.915 }, 00:08:51.915 { 00:08:51.915 "name": "BaseBdev2", 00:08:51.915 "uuid": "df511a5f-092b-5117-a2a1-fe3a53be4f47", 00:08:51.915 "is_configured": true, 00:08:51.915 "data_offset": 2048, 00:08:51.915 "data_size": 63488 00:08:51.915 }, 00:08:51.915 { 00:08:51.916 "name": "BaseBdev3", 00:08:51.916 "uuid": "c0426d56-b884-5750-8630-0b686dfb5862", 00:08:51.916 "is_configured": true, 00:08:51.916 "data_offset": 2048, 00:08:51.916 "data_size": 63488 00:08:51.916 } 00:08:51.916 ] 00:08:51.916 }' 00:08:51.916 21:41:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.916 21:41:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.239 [2024-11-27 21:41:15.224476] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.239 [2024-11-27 21:41:15.224507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.239 [2024-11-27 21:41:15.226975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.239 [2024-11-27 21:41:15.227035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.239 [2024-11-27 21:41:15.227117] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.239 [2024-11-27 21:41:15.227127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:08:52.239 { 00:08:52.239 "results": [ 00:08:52.239 { 00:08:52.239 "job": "raid_bdev1", 00:08:52.239 "core_mask": "0x1", 00:08:52.239 "workload": "randrw", 00:08:52.239 "percentage": 50, 00:08:52.239 "status": "finished", 00:08:52.239 "queue_depth": 1, 00:08:52.239 "io_size": 131072, 00:08:52.239 "runtime": 1.370378, 00:08:52.239 "iops": 16318.85508961761, 00:08:52.239 "mibps": 2039.8568862022012, 00:08:52.239 "io_failed": 0, 00:08:52.239 "io_timeout": 0, 00:08:52.239 "avg_latency_us": 58.67590692439379, 00:08:52.239 "min_latency_us": 22.69344978165939, 00:08:52.239 "max_latency_us": 1352.216593886463 00:08:52.239 } 00:08:52.239 ], 00:08:52.239 "core_count": 1 00:08:52.239 } 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 79947 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 79947 ']' 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 79947 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:52.239 21:41:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79947 00:08:52.239 killing process with pid 79947 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79947' 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 79947 00:08:52.239 [2024-11-27 21:41:15.270764] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.239 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 79947 00:08:52.239 [2024-11-27 21:41:15.296591] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CLcspT8obf 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:52.521 ************************************ 00:08:52.521 END TEST raid_write_error_test 00:08:52.521 ************************************ 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:08:52.521 00:08:52.521 real 0m3.218s 00:08:52.521 user 0m4.074s 00:08:52.521 sys 0m0.502s 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.521 21:41:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.521 21:41:15 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:52.521 21:41:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:52.521 21:41:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:08:52.521 21:41:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:52.521 21:41:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.521 21:41:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:52.521 ************************************ 00:08:52.521 START TEST raid_state_function_test 00:08:52.521 ************************************ 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.521 
21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:52.521 21:41:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80074 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80074' 00:08:52.521 Process raid pid: 80074 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80074 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80074 ']' 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.521 21:41:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.781 [2024-11-27 21:41:15.662110] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:08:52.781 [2024-11-27 21:41:15.662278] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.781 [2024-11-27 21:41:15.816351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.781 [2024-11-27 21:41:15.842014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.781 [2024-11-27 21:41:15.884859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.781 [2024-11-27 21:41:15.884938] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.721 [2024-11-27 21:41:16.495353] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.721 [2024-11-27 21:41:16.495411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.721 [2024-11-27 21:41:16.495428] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.721 [2024-11-27 21:41:16.495439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.721 [2024-11-27 21:41:16.495446] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:08:53.721 [2024-11-27 21:41:16.495458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.721 [2024-11-27 21:41:16.495464] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:53.721 [2024-11-27 21:41:16.495472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.721 "name": "Existed_Raid", 00:08:53.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.721 "strip_size_kb": 64, 00:08:53.721 "state": "configuring", 00:08:53.721 "raid_level": "raid0", 00:08:53.721 "superblock": false, 00:08:53.721 "num_base_bdevs": 4, 00:08:53.721 "num_base_bdevs_discovered": 0, 00:08:53.721 "num_base_bdevs_operational": 4, 00:08:53.721 "base_bdevs_list": [ 00:08:53.721 { 00:08:53.721 "name": "BaseBdev1", 00:08:53.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.721 "is_configured": false, 00:08:53.721 "data_offset": 0, 00:08:53.721 "data_size": 0 00:08:53.721 }, 00:08:53.721 { 00:08:53.721 "name": "BaseBdev2", 00:08:53.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.721 "is_configured": false, 00:08:53.721 "data_offset": 0, 00:08:53.721 "data_size": 0 00:08:53.721 }, 00:08:53.721 { 00:08:53.721 "name": "BaseBdev3", 00:08:53.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.721 "is_configured": false, 00:08:53.721 "data_offset": 0, 00:08:53.721 "data_size": 0 00:08:53.721 }, 00:08:53.721 { 00:08:53.721 "name": "BaseBdev4", 00:08:53.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.721 "is_configured": false, 00:08:53.721 "data_offset": 0, 00:08:53.721 "data_size": 0 00:08:53.721 } 00:08:53.721 ] 00:08:53.721 }' 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.721 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 [2024-11-27 21:41:16.926536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:53.981 [2024-11-27 21:41:16.926613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 [2024-11-27 21:41:16.938531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:53.981 [2024-11-27 21:41:16.938622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:53.981 [2024-11-27 21:41:16.938650] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:53.981 [2024-11-27 21:41:16.938672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:53.981 [2024-11-27 21:41:16.938690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:53.981 [2024-11-27 21:41:16.938711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:53.981 [2024-11-27 21:41:16.938728] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:53.981 [2024-11-27 21:41:16.938784] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 [2024-11-27 21:41:16.959567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:53.981 BaseBdev1 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.981 [ 00:08:53.981 { 00:08:53.981 "name": "BaseBdev1", 00:08:53.981 "aliases": [ 00:08:53.981 "fa3a8335-acc0-43c8-a522-c00f15d10e3a" 00:08:53.981 ], 00:08:53.981 "product_name": "Malloc disk", 00:08:53.981 "block_size": 512, 00:08:53.981 "num_blocks": 65536, 00:08:53.981 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:53.981 "assigned_rate_limits": { 00:08:53.981 "rw_ios_per_sec": 0, 00:08:53.981 "rw_mbytes_per_sec": 0, 00:08:53.981 "r_mbytes_per_sec": 0, 00:08:53.981 "w_mbytes_per_sec": 0 00:08:53.981 }, 00:08:53.981 "claimed": true, 00:08:53.981 "claim_type": "exclusive_write", 00:08:53.981 "zoned": false, 00:08:53.981 "supported_io_types": { 00:08:53.981 "read": true, 00:08:53.981 "write": true, 00:08:53.981 "unmap": true, 00:08:53.981 "flush": true, 00:08:53.981 "reset": true, 00:08:53.981 "nvme_admin": false, 00:08:53.981 "nvme_io": false, 00:08:53.981 "nvme_io_md": false, 00:08:53.981 "write_zeroes": true, 00:08:53.981 "zcopy": true, 00:08:53.981 "get_zone_info": false, 00:08:53.981 "zone_management": false, 00:08:53.981 "zone_append": false, 00:08:53.981 "compare": false, 00:08:53.981 "compare_and_write": false, 00:08:53.981 "abort": true, 00:08:53.981 "seek_hole": false, 00:08:53.981 "seek_data": false, 00:08:53.981 "copy": true, 00:08:53.981 "nvme_iov_md": false 00:08:53.981 }, 00:08:53.981 "memory_domains": [ 00:08:53.981 { 00:08:53.981 "dma_device_id": "system", 00:08:53.981 "dma_device_type": 1 00:08:53.981 }, 00:08:53.981 { 00:08:53.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.981 "dma_device_type": 2 00:08:53.981 } 00:08:53.981 ], 00:08:53.981 "driver_specific": {} 00:08:53.981 } 00:08:53.981 ] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:53.981 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.982 21:41:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.982 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.982 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.982 "name": "Existed_Raid", 
00:08:53.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.982 "strip_size_kb": 64, 00:08:53.982 "state": "configuring", 00:08:53.982 "raid_level": "raid0", 00:08:53.982 "superblock": false, 00:08:53.982 "num_base_bdevs": 4, 00:08:53.982 "num_base_bdevs_discovered": 1, 00:08:53.982 "num_base_bdevs_operational": 4, 00:08:53.982 "base_bdevs_list": [ 00:08:53.982 { 00:08:53.982 "name": "BaseBdev1", 00:08:53.982 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:53.982 "is_configured": true, 00:08:53.982 "data_offset": 0, 00:08:53.982 "data_size": 65536 00:08:53.982 }, 00:08:53.982 { 00:08:53.982 "name": "BaseBdev2", 00:08:53.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.982 "is_configured": false, 00:08:53.982 "data_offset": 0, 00:08:53.982 "data_size": 0 00:08:53.982 }, 00:08:53.982 { 00:08:53.982 "name": "BaseBdev3", 00:08:53.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.982 "is_configured": false, 00:08:53.982 "data_offset": 0, 00:08:53.982 "data_size": 0 00:08:53.982 }, 00:08:53.982 { 00:08:53.982 "name": "BaseBdev4", 00:08:53.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.982 "is_configured": false, 00:08:53.982 "data_offset": 0, 00:08:53.982 "data_size": 0 00:08:53.982 } 00:08:53.982 ] 00:08:53.982 }' 00:08:53.982 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.982 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.549 [2024-11-27 21:41:17.410865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.549 [2024-11-27 21:41:17.410972] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.549 [2024-11-27 21:41:17.418876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.549 [2024-11-27 21:41:17.420703] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.549 [2024-11-27 21:41:17.420740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.549 [2024-11-27 21:41:17.420748] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.549 [2024-11-27 21:41:17.420773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.549 [2024-11-27 21:41:17.420780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:08:54.549 [2024-11-27 21:41:17.420789] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.549 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.550 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.550 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.550 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.550 "name": "Existed_Raid", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "strip_size_kb": 64, 00:08:54.550 "state": "configuring", 00:08:54.550 "raid_level": "raid0", 00:08:54.550 "superblock": false, 00:08:54.550 "num_base_bdevs": 4, 00:08:54.550 
"num_base_bdevs_discovered": 1, 00:08:54.550 "num_base_bdevs_operational": 4, 00:08:54.550 "base_bdevs_list": [ 00:08:54.550 { 00:08:54.550 "name": "BaseBdev1", 00:08:54.550 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:54.550 "is_configured": true, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 65536 00:08:54.550 }, 00:08:54.550 { 00:08:54.550 "name": "BaseBdev2", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "is_configured": false, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 0 00:08:54.550 }, 00:08:54.550 { 00:08:54.550 "name": "BaseBdev3", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "is_configured": false, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 0 00:08:54.550 }, 00:08:54.550 { 00:08:54.550 "name": "BaseBdev4", 00:08:54.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.550 "is_configured": false, 00:08:54.550 "data_offset": 0, 00:08:54.550 "data_size": 0 00:08:54.550 } 00:08:54.550 ] 00:08:54.550 }' 00:08:54.550 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.550 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.810 [2024-11-27 21:41:17.856929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:54.810 BaseBdev2 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:54.810 21:41:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.810 [ 00:08:54.810 { 00:08:54.810 "name": "BaseBdev2", 00:08:54.810 "aliases": [ 00:08:54.810 "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2" 00:08:54.810 ], 00:08:54.810 "product_name": "Malloc disk", 00:08:54.810 "block_size": 512, 00:08:54.810 "num_blocks": 65536, 00:08:54.810 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:54.810 "assigned_rate_limits": { 00:08:54.810 "rw_ios_per_sec": 0, 00:08:54.810 "rw_mbytes_per_sec": 0, 00:08:54.810 "r_mbytes_per_sec": 0, 00:08:54.810 "w_mbytes_per_sec": 0 00:08:54.810 }, 00:08:54.810 "claimed": true, 00:08:54.810 "claim_type": "exclusive_write", 00:08:54.810 "zoned": false, 00:08:54.810 "supported_io_types": { 
00:08:54.810 "read": true, 00:08:54.810 "write": true, 00:08:54.810 "unmap": true, 00:08:54.810 "flush": true, 00:08:54.810 "reset": true, 00:08:54.810 "nvme_admin": false, 00:08:54.810 "nvme_io": false, 00:08:54.810 "nvme_io_md": false, 00:08:54.810 "write_zeroes": true, 00:08:54.810 "zcopy": true, 00:08:54.810 "get_zone_info": false, 00:08:54.810 "zone_management": false, 00:08:54.810 "zone_append": false, 00:08:54.810 "compare": false, 00:08:54.810 "compare_and_write": false, 00:08:54.810 "abort": true, 00:08:54.810 "seek_hole": false, 00:08:54.810 "seek_data": false, 00:08:54.810 "copy": true, 00:08:54.810 "nvme_iov_md": false 00:08:54.810 }, 00:08:54.810 "memory_domains": [ 00:08:54.810 { 00:08:54.810 "dma_device_id": "system", 00:08:54.810 "dma_device_type": 1 00:08:54.810 }, 00:08:54.810 { 00:08:54.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.810 "dma_device_type": 2 00:08:54.810 } 00:08:54.810 ], 00:08:54.810 "driver_specific": {} 00:08:54.810 } 00:08:54.810 ] 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.810 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.071 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.071 "name": "Existed_Raid", 00:08:55.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.071 "strip_size_kb": 64, 00:08:55.071 "state": "configuring", 00:08:55.071 "raid_level": "raid0", 00:08:55.071 "superblock": false, 00:08:55.071 "num_base_bdevs": 4, 00:08:55.071 "num_base_bdevs_discovered": 2, 00:08:55.071 "num_base_bdevs_operational": 4, 00:08:55.071 "base_bdevs_list": [ 00:08:55.071 { 00:08:55.071 "name": "BaseBdev1", 00:08:55.071 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:55.071 "is_configured": true, 00:08:55.071 "data_offset": 0, 00:08:55.071 "data_size": 65536 00:08:55.071 }, 00:08:55.071 { 00:08:55.071 "name": "BaseBdev2", 00:08:55.071 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:55.071 
"is_configured": true, 00:08:55.071 "data_offset": 0, 00:08:55.071 "data_size": 65536 00:08:55.071 }, 00:08:55.071 { 00:08:55.071 "name": "BaseBdev3", 00:08:55.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.071 "is_configured": false, 00:08:55.071 "data_offset": 0, 00:08:55.071 "data_size": 0 00:08:55.071 }, 00:08:55.071 { 00:08:55.071 "name": "BaseBdev4", 00:08:55.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.071 "is_configured": false, 00:08:55.071 "data_offset": 0, 00:08:55.071 "data_size": 0 00:08:55.071 } 00:08:55.071 ] 00:08:55.071 }' 00:08:55.071 21:41:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.071 21:41:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 [2024-11-27 21:41:18.315628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:55.331 BaseBdev3 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.331 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.331 [ 00:08:55.331 { 00:08:55.331 "name": "BaseBdev3", 00:08:55.331 "aliases": [ 00:08:55.331 "6da78a17-49f8-49a4-8898-fb643da4952b" 00:08:55.331 ], 00:08:55.331 "product_name": "Malloc disk", 00:08:55.332 "block_size": 512, 00:08:55.332 "num_blocks": 65536, 00:08:55.332 "uuid": "6da78a17-49f8-49a4-8898-fb643da4952b", 00:08:55.332 "assigned_rate_limits": { 00:08:55.332 "rw_ios_per_sec": 0, 00:08:55.332 "rw_mbytes_per_sec": 0, 00:08:55.332 "r_mbytes_per_sec": 0, 00:08:55.332 "w_mbytes_per_sec": 0 00:08:55.332 }, 00:08:55.332 "claimed": true, 00:08:55.332 "claim_type": "exclusive_write", 00:08:55.332 "zoned": false, 00:08:55.332 "supported_io_types": { 00:08:55.332 "read": true, 00:08:55.332 "write": true, 00:08:55.332 "unmap": true, 00:08:55.332 "flush": true, 00:08:55.332 "reset": true, 00:08:55.332 "nvme_admin": false, 00:08:55.332 "nvme_io": false, 00:08:55.332 "nvme_io_md": false, 00:08:55.332 "write_zeroes": true, 00:08:55.332 "zcopy": true, 00:08:55.332 "get_zone_info": false, 00:08:55.332 "zone_management": false, 00:08:55.332 "zone_append": false, 00:08:55.332 "compare": false, 00:08:55.332 "compare_and_write": false, 
00:08:55.332 "abort": true, 00:08:55.332 "seek_hole": false, 00:08:55.332 "seek_data": false, 00:08:55.332 "copy": true, 00:08:55.332 "nvme_iov_md": false 00:08:55.332 }, 00:08:55.332 "memory_domains": [ 00:08:55.332 { 00:08:55.332 "dma_device_id": "system", 00:08:55.332 "dma_device_type": 1 00:08:55.332 }, 00:08:55.332 { 00:08:55.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.332 "dma_device_type": 2 00:08:55.332 } 00:08:55.332 ], 00:08:55.332 "driver_specific": {} 00:08:55.332 } 00:08:55.332 ] 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.332 "name": "Existed_Raid", 00:08:55.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.332 "strip_size_kb": 64, 00:08:55.332 "state": "configuring", 00:08:55.332 "raid_level": "raid0", 00:08:55.332 "superblock": false, 00:08:55.332 "num_base_bdevs": 4, 00:08:55.332 "num_base_bdevs_discovered": 3, 00:08:55.332 "num_base_bdevs_operational": 4, 00:08:55.332 "base_bdevs_list": [ 00:08:55.332 { 00:08:55.332 "name": "BaseBdev1", 00:08:55.332 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:55.332 "is_configured": true, 00:08:55.332 "data_offset": 0, 00:08:55.332 "data_size": 65536 00:08:55.332 }, 00:08:55.332 { 00:08:55.332 "name": "BaseBdev2", 00:08:55.332 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:55.332 "is_configured": true, 00:08:55.332 "data_offset": 0, 00:08:55.332 "data_size": 65536 00:08:55.332 }, 00:08:55.332 { 00:08:55.332 "name": "BaseBdev3", 00:08:55.332 "uuid": "6da78a17-49f8-49a4-8898-fb643da4952b", 00:08:55.332 "is_configured": true, 00:08:55.332 "data_offset": 0, 00:08:55.332 "data_size": 65536 00:08:55.332 }, 00:08:55.332 { 00:08:55.332 "name": "BaseBdev4", 00:08:55.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.332 "is_configured": false, 
00:08:55.332 "data_offset": 0, 00:08:55.332 "data_size": 0 00:08:55.332 } 00:08:55.332 ] 00:08:55.332 }' 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.332 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.902 [2024-11-27 21:41:18.753793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:55.902 [2024-11-27 21:41:18.753916] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:08:55.902 [2024-11-27 21:41:18.753943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:08:55.902 [2024-11-27 21:41:18.754277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:08:55.902 [2024-11-27 21:41:18.754481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:08:55.902 [2024-11-27 21:41:18.754526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:08:55.902 [2024-11-27 21:41:18.754783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.902 BaseBdev4 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.902 [ 00:08:55.902 { 00:08:55.902 "name": "BaseBdev4", 00:08:55.902 "aliases": [ 00:08:55.902 "b3042155-c76d-4a47-81f7-547508390944" 00:08:55.902 ], 00:08:55.902 "product_name": "Malloc disk", 00:08:55.902 "block_size": 512, 00:08:55.902 "num_blocks": 65536, 00:08:55.902 "uuid": "b3042155-c76d-4a47-81f7-547508390944", 00:08:55.902 "assigned_rate_limits": { 00:08:55.902 "rw_ios_per_sec": 0, 00:08:55.902 "rw_mbytes_per_sec": 0, 00:08:55.902 "r_mbytes_per_sec": 0, 00:08:55.902 "w_mbytes_per_sec": 0 00:08:55.902 }, 00:08:55.902 "claimed": true, 00:08:55.902 "claim_type": "exclusive_write", 00:08:55.902 "zoned": false, 00:08:55.902 "supported_io_types": { 00:08:55.902 "read": true, 00:08:55.902 "write": true, 00:08:55.902 "unmap": true, 00:08:55.902 "flush": true, 00:08:55.902 "reset": true, 00:08:55.902 
"nvme_admin": false, 00:08:55.902 "nvme_io": false, 00:08:55.902 "nvme_io_md": false, 00:08:55.902 "write_zeroes": true, 00:08:55.902 "zcopy": true, 00:08:55.902 "get_zone_info": false, 00:08:55.902 "zone_management": false, 00:08:55.902 "zone_append": false, 00:08:55.902 "compare": false, 00:08:55.902 "compare_and_write": false, 00:08:55.902 "abort": true, 00:08:55.902 "seek_hole": false, 00:08:55.902 "seek_data": false, 00:08:55.902 "copy": true, 00:08:55.902 "nvme_iov_md": false 00:08:55.902 }, 00:08:55.902 "memory_domains": [ 00:08:55.902 { 00:08:55.902 "dma_device_id": "system", 00:08:55.902 "dma_device_type": 1 00:08:55.902 }, 00:08:55.902 { 00:08:55.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.902 "dma_device_type": 2 00:08:55.902 } 00:08:55.902 ], 00:08:55.902 "driver_specific": {} 00:08:55.902 } 00:08:55.902 ] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:55.902 21:41:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.902 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.902 "name": "Existed_Raid", 00:08:55.902 "uuid": "81a30119-1519-4717-bf99-6b70ddef3af3", 00:08:55.902 "strip_size_kb": 64, 00:08:55.902 "state": "online", 00:08:55.902 "raid_level": "raid0", 00:08:55.902 "superblock": false, 00:08:55.902 "num_base_bdevs": 4, 00:08:55.902 "num_base_bdevs_discovered": 4, 00:08:55.902 "num_base_bdevs_operational": 4, 00:08:55.902 "base_bdevs_list": [ 00:08:55.902 { 00:08:55.902 "name": "BaseBdev1", 00:08:55.902 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:55.902 "is_configured": true, 00:08:55.902 "data_offset": 0, 00:08:55.902 "data_size": 65536 00:08:55.902 }, 00:08:55.902 { 00:08:55.902 "name": "BaseBdev2", 00:08:55.902 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:55.902 "is_configured": true, 00:08:55.902 "data_offset": 0, 00:08:55.902 "data_size": 65536 00:08:55.902 }, 00:08:55.902 { 00:08:55.902 "name": "BaseBdev3", 00:08:55.902 "uuid": 
"6da78a17-49f8-49a4-8898-fb643da4952b", 00:08:55.902 "is_configured": true, 00:08:55.902 "data_offset": 0, 00:08:55.902 "data_size": 65536 00:08:55.902 }, 00:08:55.902 { 00:08:55.902 "name": "BaseBdev4", 00:08:55.902 "uuid": "b3042155-c76d-4a47-81f7-547508390944", 00:08:55.902 "is_configured": true, 00:08:55.903 "data_offset": 0, 00:08:55.903 "data_size": 65536 00:08:55.903 } 00:08:55.903 ] 00:08:55.903 }' 00:08:55.903 21:41:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.903 21:41:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.163 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.163 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.163 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.163 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.163 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.164 [2024-11-27 21:41:19.209408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.164 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.164 21:41:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.164 "name": "Existed_Raid", 00:08:56.164 "aliases": [ 00:08:56.164 "81a30119-1519-4717-bf99-6b70ddef3af3" 00:08:56.164 ], 00:08:56.164 "product_name": "Raid Volume", 00:08:56.164 "block_size": 512, 00:08:56.164 "num_blocks": 262144, 00:08:56.164 "uuid": "81a30119-1519-4717-bf99-6b70ddef3af3", 00:08:56.164 "assigned_rate_limits": { 00:08:56.164 "rw_ios_per_sec": 0, 00:08:56.164 "rw_mbytes_per_sec": 0, 00:08:56.164 "r_mbytes_per_sec": 0, 00:08:56.164 "w_mbytes_per_sec": 0 00:08:56.164 }, 00:08:56.164 "claimed": false, 00:08:56.164 "zoned": false, 00:08:56.164 "supported_io_types": { 00:08:56.164 "read": true, 00:08:56.164 "write": true, 00:08:56.164 "unmap": true, 00:08:56.164 "flush": true, 00:08:56.164 "reset": true, 00:08:56.164 "nvme_admin": false, 00:08:56.164 "nvme_io": false, 00:08:56.164 "nvme_io_md": false, 00:08:56.164 "write_zeroes": true, 00:08:56.164 "zcopy": false, 00:08:56.164 "get_zone_info": false, 00:08:56.164 "zone_management": false, 00:08:56.164 "zone_append": false, 00:08:56.164 "compare": false, 00:08:56.164 "compare_and_write": false, 00:08:56.164 "abort": false, 00:08:56.164 "seek_hole": false, 00:08:56.164 "seek_data": false, 00:08:56.164 "copy": false, 00:08:56.164 "nvme_iov_md": false 00:08:56.164 }, 00:08:56.164 "memory_domains": [ 00:08:56.164 { 00:08:56.164 "dma_device_id": "system", 00:08:56.164 "dma_device_type": 1 00:08:56.164 }, 00:08:56.164 { 00:08:56.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.164 "dma_device_type": 2 00:08:56.164 }, 00:08:56.164 { 00:08:56.164 "dma_device_id": "system", 00:08:56.164 "dma_device_type": 1 00:08:56.164 }, 00:08:56.164 { 00:08:56.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.165 "dma_device_type": 2 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "dma_device_id": "system", 00:08:56.165 "dma_device_type": 1 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:56.165 "dma_device_type": 2 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "dma_device_id": "system", 00:08:56.165 "dma_device_type": 1 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.165 "dma_device_type": 2 00:08:56.165 } 00:08:56.165 ], 00:08:56.165 "driver_specific": { 00:08:56.165 "raid": { 00:08:56.165 "uuid": "81a30119-1519-4717-bf99-6b70ddef3af3", 00:08:56.165 "strip_size_kb": 64, 00:08:56.165 "state": "online", 00:08:56.165 "raid_level": "raid0", 00:08:56.165 "superblock": false, 00:08:56.165 "num_base_bdevs": 4, 00:08:56.165 "num_base_bdevs_discovered": 4, 00:08:56.165 "num_base_bdevs_operational": 4, 00:08:56.165 "base_bdevs_list": [ 00:08:56.165 { 00:08:56.165 "name": "BaseBdev1", 00:08:56.165 "uuid": "fa3a8335-acc0-43c8-a522-c00f15d10e3a", 00:08:56.165 "is_configured": true, 00:08:56.165 "data_offset": 0, 00:08:56.165 "data_size": 65536 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "name": "BaseBdev2", 00:08:56.165 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:56.165 "is_configured": true, 00:08:56.165 "data_offset": 0, 00:08:56.165 "data_size": 65536 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "name": "BaseBdev3", 00:08:56.165 "uuid": "6da78a17-49f8-49a4-8898-fb643da4952b", 00:08:56.165 "is_configured": true, 00:08:56.165 "data_offset": 0, 00:08:56.165 "data_size": 65536 00:08:56.165 }, 00:08:56.165 { 00:08:56.165 "name": "BaseBdev4", 00:08:56.165 "uuid": "b3042155-c76d-4a47-81f7-547508390944", 00:08:56.165 "is_configured": true, 00:08:56.165 "data_offset": 0, 00:08:56.165 "data_size": 65536 00:08:56.165 } 00:08:56.165 ] 00:08:56.165 } 00:08:56.165 } 00:08:56.165 }' 00:08:56.165 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.426 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:56.426 BaseBdev2 00:08:56.426 BaseBdev3 
00:08:56.426 BaseBdev4' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.427 21:41:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.427 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.685 21:41:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.685 [2024-11-27 21:41:19.560519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:56.685 [2024-11-27 21:41:19.560551] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.685 [2024-11-27 21:41:19.560612] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.685 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.686 "name": "Existed_Raid", 00:08:56.686 "uuid": "81a30119-1519-4717-bf99-6b70ddef3af3", 00:08:56.686 "strip_size_kb": 64, 00:08:56.686 "state": "offline", 00:08:56.686 "raid_level": "raid0", 00:08:56.686 "superblock": false, 00:08:56.686 "num_base_bdevs": 4, 00:08:56.686 "num_base_bdevs_discovered": 3, 00:08:56.686 "num_base_bdevs_operational": 3, 00:08:56.686 "base_bdevs_list": [ 00:08:56.686 { 00:08:56.686 "name": null, 00:08:56.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.686 "is_configured": false, 00:08:56.686 "data_offset": 0, 00:08:56.686 "data_size": 65536 00:08:56.686 }, 00:08:56.686 { 00:08:56.686 "name": "BaseBdev2", 00:08:56.686 "uuid": "3f3e1bdb-67bb-447e-9cba-394f6e88e6c2", 00:08:56.686 "is_configured": 
true, 00:08:56.686 "data_offset": 0, 00:08:56.686 "data_size": 65536 00:08:56.686 }, 00:08:56.686 { 00:08:56.686 "name": "BaseBdev3", 00:08:56.686 "uuid": "6da78a17-49f8-49a4-8898-fb643da4952b", 00:08:56.686 "is_configured": true, 00:08:56.686 "data_offset": 0, 00:08:56.686 "data_size": 65536 00:08:56.686 }, 00:08:56.686 { 00:08:56.686 "name": "BaseBdev4", 00:08:56.686 "uuid": "b3042155-c76d-4a47-81f7-547508390944", 00:08:56.686 "is_configured": true, 00:08:56.686 "data_offset": 0, 00:08:56.686 "data_size": 65536 00:08:56.686 } 00:08:56.686 ] 00:08:56.686 }' 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.686 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.945 21:41:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:56.945 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.945 [2024-11-27 21:41:20.054880] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 [2024-11-27 21:41:20.126039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.205 21:41:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.205 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 [2024-11-27 21:41:20.197119] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:08:57.206 [2024-11-27 21:41:20.197206] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 BaseBdev2 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.206 [ 00:08:57.206 { 00:08:57.206 "name": "BaseBdev2", 00:08:57.206 "aliases": [ 00:08:57.206 "e3cf1d02-6103-4ce2-9ceb-af23efda6549" 00:08:57.206 ], 00:08:57.206 "product_name": "Malloc disk", 00:08:57.206 "block_size": 512, 00:08:57.206 "num_blocks": 65536, 00:08:57.206 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:57.206 "assigned_rate_limits": { 00:08:57.206 "rw_ios_per_sec": 0, 00:08:57.206 "rw_mbytes_per_sec": 0, 00:08:57.206 "r_mbytes_per_sec": 0, 00:08:57.206 "w_mbytes_per_sec": 0 00:08:57.206 }, 00:08:57.206 "claimed": false, 00:08:57.206 "zoned": false, 00:08:57.206 "supported_io_types": { 00:08:57.206 "read": true, 00:08:57.206 "write": true, 00:08:57.206 "unmap": true, 00:08:57.206 "flush": true, 00:08:57.206 "reset": true, 00:08:57.206 "nvme_admin": false, 00:08:57.206 "nvme_io": false, 00:08:57.206 "nvme_io_md": false, 00:08:57.206 "write_zeroes": true, 00:08:57.206 "zcopy": true, 00:08:57.206 "get_zone_info": false, 00:08:57.206 "zone_management": false, 00:08:57.206 "zone_append": false, 00:08:57.206 "compare": false, 00:08:57.206 "compare_and_write": false, 00:08:57.206 "abort": true, 00:08:57.206 "seek_hole": false, 00:08:57.206 
"seek_data": false, 00:08:57.206 "copy": true, 00:08:57.206 "nvme_iov_md": false 00:08:57.206 }, 00:08:57.206 "memory_domains": [ 00:08:57.206 { 00:08:57.206 "dma_device_id": "system", 00:08:57.206 "dma_device_type": 1 00:08:57.206 }, 00:08:57.206 { 00:08:57.206 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.206 "dma_device_type": 2 00:08:57.206 } 00:08:57.206 ], 00:08:57.206 "driver_specific": {} 00:08:57.206 } 00:08:57.206 ] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.206 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.466 BaseBdev3 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.466 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.466 [ 00:08:57.466 { 00:08:57.466 "name": "BaseBdev3", 00:08:57.466 "aliases": [ 00:08:57.467 "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a" 00:08:57.467 ], 00:08:57.467 "product_name": "Malloc disk", 00:08:57.467 "block_size": 512, 00:08:57.467 "num_blocks": 65536, 00:08:57.467 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:57.467 "assigned_rate_limits": { 00:08:57.467 "rw_ios_per_sec": 0, 00:08:57.467 "rw_mbytes_per_sec": 0, 00:08:57.467 "r_mbytes_per_sec": 0, 00:08:57.467 "w_mbytes_per_sec": 0 00:08:57.467 }, 00:08:57.467 "claimed": false, 00:08:57.467 "zoned": false, 00:08:57.467 "supported_io_types": { 00:08:57.467 "read": true, 00:08:57.467 "write": true, 00:08:57.467 "unmap": true, 00:08:57.467 "flush": true, 00:08:57.467 "reset": true, 00:08:57.467 "nvme_admin": false, 00:08:57.467 "nvme_io": false, 00:08:57.467 "nvme_io_md": false, 00:08:57.467 "write_zeroes": true, 00:08:57.467 "zcopy": true, 00:08:57.467 "get_zone_info": false, 00:08:57.467 "zone_management": false, 00:08:57.467 "zone_append": false, 00:08:57.467 "compare": false, 00:08:57.467 "compare_and_write": false, 00:08:57.467 "abort": true, 00:08:57.467 "seek_hole": false, 00:08:57.467 "seek_data": false, 
00:08:57.467 "copy": true, 00:08:57.467 "nvme_iov_md": false 00:08:57.467 }, 00:08:57.467 "memory_domains": [ 00:08:57.467 { 00:08:57.467 "dma_device_id": "system", 00:08:57.467 "dma_device_type": 1 00:08:57.467 }, 00:08:57.467 { 00:08:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.467 "dma_device_type": 2 00:08:57.467 } 00:08:57.467 ], 00:08:57.467 "driver_specific": {} 00:08:57.467 } 00:08:57.467 ] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.467 BaseBdev4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.467 
21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.467 [ 00:08:57.467 { 00:08:57.467 "name": "BaseBdev4", 00:08:57.467 "aliases": [ 00:08:57.467 "9530f800-c574-4f11-bd9b-45f89ec55a6f" 00:08:57.467 ], 00:08:57.467 "product_name": "Malloc disk", 00:08:57.467 "block_size": 512, 00:08:57.467 "num_blocks": 65536, 00:08:57.467 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:57.467 "assigned_rate_limits": { 00:08:57.467 "rw_ios_per_sec": 0, 00:08:57.467 "rw_mbytes_per_sec": 0, 00:08:57.467 "r_mbytes_per_sec": 0, 00:08:57.467 "w_mbytes_per_sec": 0 00:08:57.467 }, 00:08:57.467 "claimed": false, 00:08:57.467 "zoned": false, 00:08:57.467 "supported_io_types": { 00:08:57.467 "read": true, 00:08:57.467 "write": true, 00:08:57.467 "unmap": true, 00:08:57.467 "flush": true, 00:08:57.467 "reset": true, 00:08:57.467 "nvme_admin": false, 00:08:57.467 "nvme_io": false, 00:08:57.467 "nvme_io_md": false, 00:08:57.467 "write_zeroes": true, 00:08:57.467 "zcopy": true, 00:08:57.467 "get_zone_info": false, 00:08:57.467 "zone_management": false, 00:08:57.467 "zone_append": false, 00:08:57.467 "compare": false, 00:08:57.467 "compare_and_write": false, 00:08:57.467 "abort": true, 00:08:57.467 "seek_hole": false, 00:08:57.467 "seek_data": false, 00:08:57.467 
"copy": true, 00:08:57.467 "nvme_iov_md": false 00:08:57.467 }, 00:08:57.467 "memory_domains": [ 00:08:57.467 { 00:08:57.467 "dma_device_id": "system", 00:08:57.467 "dma_device_type": 1 00:08:57.467 }, 00:08:57.467 { 00:08:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.467 "dma_device_type": 2 00:08:57.467 } 00:08:57.467 ], 00:08:57.467 "driver_specific": {} 00:08:57.467 } 00:08:57.467 ] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.467 [2024-11-27 21:41:20.425239] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.467 [2024-11-27 21:41:20.425329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.467 [2024-11-27 21:41:20.425385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.467 [2024-11-27 21:41:20.427201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:57.467 [2024-11-27 21:41:20.427303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.467 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.467 "name": "Existed_Raid", 00:08:57.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.468 "strip_size_kb": 64, 00:08:57.468 "state": "configuring", 00:08:57.468 
"raid_level": "raid0", 00:08:57.468 "superblock": false, 00:08:57.468 "num_base_bdevs": 4, 00:08:57.468 "num_base_bdevs_discovered": 3, 00:08:57.468 "num_base_bdevs_operational": 4, 00:08:57.468 "base_bdevs_list": [ 00:08:57.468 { 00:08:57.468 "name": "BaseBdev1", 00:08:57.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.468 "is_configured": false, 00:08:57.468 "data_offset": 0, 00:08:57.468 "data_size": 0 00:08:57.468 }, 00:08:57.468 { 00:08:57.468 "name": "BaseBdev2", 00:08:57.468 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:57.468 "is_configured": true, 00:08:57.468 "data_offset": 0, 00:08:57.468 "data_size": 65536 00:08:57.468 }, 00:08:57.468 { 00:08:57.468 "name": "BaseBdev3", 00:08:57.468 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:57.468 "is_configured": true, 00:08:57.468 "data_offset": 0, 00:08:57.468 "data_size": 65536 00:08:57.468 }, 00:08:57.468 { 00:08:57.468 "name": "BaseBdev4", 00:08:57.468 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:57.468 "is_configured": true, 00:08:57.468 "data_offset": 0, 00:08:57.468 "data_size": 65536 00:08:57.468 } 00:08:57.468 ] 00:08:57.468 }' 00:08:57.468 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.468 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.039 [2024-11-27 21:41:20.880496] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.039 "name": "Existed_Raid", 00:08:58.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.039 "strip_size_kb": 64, 00:08:58.039 "state": "configuring", 00:08:58.039 "raid_level": "raid0", 00:08:58.039 "superblock": false, 00:08:58.039 
"num_base_bdevs": 4, 00:08:58.039 "num_base_bdevs_discovered": 2, 00:08:58.039 "num_base_bdevs_operational": 4, 00:08:58.039 "base_bdevs_list": [ 00:08:58.039 { 00:08:58.039 "name": "BaseBdev1", 00:08:58.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.039 "is_configured": false, 00:08:58.039 "data_offset": 0, 00:08:58.039 "data_size": 0 00:08:58.039 }, 00:08:58.039 { 00:08:58.039 "name": null, 00:08:58.039 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:58.039 "is_configured": false, 00:08:58.039 "data_offset": 0, 00:08:58.039 "data_size": 65536 00:08:58.039 }, 00:08:58.039 { 00:08:58.039 "name": "BaseBdev3", 00:08:58.039 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:58.039 "is_configured": true, 00:08:58.039 "data_offset": 0, 00:08:58.039 "data_size": 65536 00:08:58.039 }, 00:08:58.039 { 00:08:58.039 "name": "BaseBdev4", 00:08:58.039 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:58.039 "is_configured": true, 00:08:58.039 "data_offset": 0, 00:08:58.039 "data_size": 65536 00:08:58.039 } 00:08:58.039 ] 00:08:58.039 }' 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.039 21:41:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:58.299 21:41:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 [2024-11-27 21:41:21.390616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.299 BaseBdev1 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.299 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.300 21:41:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.300 [ 00:08:58.300 { 00:08:58.300 "name": "BaseBdev1", 00:08:58.300 "aliases": [ 00:08:58.300 "36de47ce-7a08-45bf-8fbd-d3e0663d1da3" 00:08:58.300 ], 00:08:58.300 "product_name": "Malloc disk", 00:08:58.300 "block_size": 512, 00:08:58.300 "num_blocks": 65536, 00:08:58.300 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:08:58.300 "assigned_rate_limits": { 00:08:58.300 "rw_ios_per_sec": 0, 00:08:58.300 "rw_mbytes_per_sec": 0, 00:08:58.300 "r_mbytes_per_sec": 0, 00:08:58.300 "w_mbytes_per_sec": 0 00:08:58.300 }, 00:08:58.300 "claimed": true, 00:08:58.300 "claim_type": "exclusive_write", 00:08:58.300 "zoned": false, 00:08:58.560 "supported_io_types": { 00:08:58.560 "read": true, 00:08:58.560 "write": true, 00:08:58.560 "unmap": true, 00:08:58.560 "flush": true, 00:08:58.560 "reset": true, 00:08:58.560 "nvme_admin": false, 00:08:58.560 "nvme_io": false, 00:08:58.560 "nvme_io_md": false, 00:08:58.560 "write_zeroes": true, 00:08:58.560 "zcopy": true, 00:08:58.560 "get_zone_info": false, 00:08:58.560 "zone_management": false, 00:08:58.560 "zone_append": false, 00:08:58.560 "compare": false, 00:08:58.560 "compare_and_write": false, 00:08:58.560 "abort": true, 00:08:58.560 "seek_hole": false, 00:08:58.560 "seek_data": false, 00:08:58.560 "copy": true, 00:08:58.560 "nvme_iov_md": false 00:08:58.560 }, 00:08:58.560 "memory_domains": [ 00:08:58.560 { 00:08:58.560 "dma_device_id": "system", 00:08:58.560 "dma_device_type": 1 00:08:58.560 }, 00:08:58.560 { 00:08:58.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.560 "dma_device_type": 2 00:08:58.560 } 00:08:58.560 ], 00:08:58.560 "driver_specific": {} 00:08:58.560 } 00:08:58.560 ] 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.560 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.560 "name": "Existed_Raid", 00:08:58.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.560 "strip_size_kb": 64, 00:08:58.560 "state": "configuring", 00:08:58.560 "raid_level": "raid0", 00:08:58.560 "superblock": false, 
00:08:58.560 "num_base_bdevs": 4, 00:08:58.560 "num_base_bdevs_discovered": 3, 00:08:58.560 "num_base_bdevs_operational": 4, 00:08:58.560 "base_bdevs_list": [ 00:08:58.560 { 00:08:58.560 "name": "BaseBdev1", 00:08:58.560 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:08:58.560 "is_configured": true, 00:08:58.560 "data_offset": 0, 00:08:58.560 "data_size": 65536 00:08:58.560 }, 00:08:58.560 { 00:08:58.560 "name": null, 00:08:58.560 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:58.560 "is_configured": false, 00:08:58.560 "data_offset": 0, 00:08:58.560 "data_size": 65536 00:08:58.560 }, 00:08:58.560 { 00:08:58.560 "name": "BaseBdev3", 00:08:58.561 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:58.561 "is_configured": true, 00:08:58.561 "data_offset": 0, 00:08:58.561 "data_size": 65536 00:08:58.561 }, 00:08:58.561 { 00:08:58.561 "name": "BaseBdev4", 00:08:58.561 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:58.561 "is_configured": true, 00:08:58.561 "data_offset": 0, 00:08:58.561 "data_size": 65536 00:08:58.561 } 00:08:58.561 ] 00:08:58.561 }' 00:08:58.561 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.561 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.821 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.821 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.821 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.821 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:58.821 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.081 21:41:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.081 [2024-11-27 21:41:21.957739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.081 21:41:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.081 21:41:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.081 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.081 "name": "Existed_Raid", 00:08:59.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.081 "strip_size_kb": 64, 00:08:59.081 "state": "configuring", 00:08:59.081 "raid_level": "raid0", 00:08:59.081 "superblock": false, 00:08:59.081 "num_base_bdevs": 4, 00:08:59.081 "num_base_bdevs_discovered": 2, 00:08:59.081 "num_base_bdevs_operational": 4, 00:08:59.081 "base_bdevs_list": [ 00:08:59.081 { 00:08:59.081 "name": "BaseBdev1", 00:08:59.081 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:08:59.081 "is_configured": true, 00:08:59.081 "data_offset": 0, 00:08:59.081 "data_size": 65536 00:08:59.081 }, 00:08:59.081 { 00:08:59.081 "name": null, 00:08:59.081 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:59.081 "is_configured": false, 00:08:59.081 "data_offset": 0, 00:08:59.081 "data_size": 65536 00:08:59.081 }, 00:08:59.081 { 00:08:59.081 "name": null, 00:08:59.081 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:59.081 "is_configured": false, 00:08:59.081 "data_offset": 0, 00:08:59.081 "data_size": 65536 00:08:59.081 }, 00:08:59.081 { 00:08:59.081 "name": "BaseBdev4", 00:08:59.081 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:59.081 "is_configured": true, 00:08:59.081 "data_offset": 0, 00:08:59.081 "data_size": 65536 00:08:59.081 } 00:08:59.081 ] 00:08:59.081 }' 00:08:59.081 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.081 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.341 [2024-11-27 21:41:22.440947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.341 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.600 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.600 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.600 "name": "Existed_Raid", 00:08:59.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.600 "strip_size_kb": 64, 00:08:59.600 "state": "configuring", 00:08:59.600 "raid_level": "raid0", 00:08:59.600 "superblock": false, 00:08:59.600 "num_base_bdevs": 4, 00:08:59.600 "num_base_bdevs_discovered": 3, 00:08:59.600 "num_base_bdevs_operational": 4, 00:08:59.600 "base_bdevs_list": [ 00:08:59.600 { 00:08:59.600 "name": "BaseBdev1", 00:08:59.600 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:08:59.600 "is_configured": true, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 65536 00:08:59.600 }, 00:08:59.600 { 00:08:59.600 "name": null, 00:08:59.600 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:59.600 "is_configured": false, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 65536 00:08:59.600 }, 00:08:59.600 { 00:08:59.600 "name": "BaseBdev3", 00:08:59.600 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 
00:08:59.600 "is_configured": true, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 65536 00:08:59.600 }, 00:08:59.600 { 00:08:59.600 "name": "BaseBdev4", 00:08:59.600 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:59.600 "is_configured": true, 00:08:59.600 "data_offset": 0, 00:08:59.600 "data_size": 65536 00:08:59.600 } 00:08:59.600 ] 00:08:59.600 }' 00:08:59.600 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.600 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 [2024-11-27 21:41:22.880236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:08:59.859 21:41:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.859 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.859 "name": "Existed_Raid", 00:08:59.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.859 "strip_size_kb": 64, 00:08:59.859 "state": "configuring", 00:08:59.859 "raid_level": "raid0", 00:08:59.859 "superblock": false, 00:08:59.859 "num_base_bdevs": 4, 00:08:59.859 "num_base_bdevs_discovered": 2, 00:08:59.859 
"num_base_bdevs_operational": 4, 00:08:59.859 "base_bdevs_list": [ 00:08:59.859 { 00:08:59.859 "name": null, 00:08:59.859 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:08:59.859 "is_configured": false, 00:08:59.859 "data_offset": 0, 00:08:59.859 "data_size": 65536 00:08:59.859 }, 00:08:59.860 { 00:08:59.860 "name": null, 00:08:59.860 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:08:59.860 "is_configured": false, 00:08:59.860 "data_offset": 0, 00:08:59.860 "data_size": 65536 00:08:59.860 }, 00:08:59.860 { 00:08:59.860 "name": "BaseBdev3", 00:08:59.860 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:08:59.860 "is_configured": true, 00:08:59.860 "data_offset": 0, 00:08:59.860 "data_size": 65536 00:08:59.860 }, 00:08:59.860 { 00:08:59.860 "name": "BaseBdev4", 00:08:59.860 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:08:59.860 "is_configured": true, 00:08:59.860 "data_offset": 0, 00:08:59.860 "data_size": 65536 00:08:59.860 } 00:08:59.860 ] 00:08:59.860 }' 00:08:59.860 21:41:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.860 21:41:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 [2024-11-27 21:41:23.337715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.428 21:41:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.428 "name": "Existed_Raid", 00:09:00.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.428 "strip_size_kb": 64, 00:09:00.428 "state": "configuring", 00:09:00.428 "raid_level": "raid0", 00:09:00.428 "superblock": false, 00:09:00.428 "num_base_bdevs": 4, 00:09:00.428 "num_base_bdevs_discovered": 3, 00:09:00.428 "num_base_bdevs_operational": 4, 00:09:00.428 "base_bdevs_list": [ 00:09:00.428 { 00:09:00.428 "name": null, 00:09:00.428 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:09:00.428 "is_configured": false, 00:09:00.428 "data_offset": 0, 00:09:00.428 "data_size": 65536 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "name": "BaseBdev2", 00:09:00.428 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:09:00.428 "is_configured": true, 00:09:00.428 "data_offset": 0, 00:09:00.428 "data_size": 65536 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "name": "BaseBdev3", 00:09:00.428 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:09:00.428 "is_configured": true, 00:09:00.428 "data_offset": 0, 00:09:00.428 "data_size": 65536 00:09:00.428 }, 00:09:00.428 { 00:09:00.428 "name": "BaseBdev4", 00:09:00.428 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:09:00.428 "is_configured": true, 00:09:00.428 "data_offset": 0, 00:09:00.428 "data_size": 65536 00:09:00.428 } 00:09:00.428 ] 00:09:00.428 }' 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.428 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:00.688 
21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.688 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 36de47ce-7a08-45bf-8fbd-d3e0663d1da3 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 [2024-11-27 21:41:23.843915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:00.948 [2024-11-27 21:41:23.843958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:00.948 [2024-11-27 21:41:23.843966] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:00.948 [2024-11-27 21:41:23.844234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 
00:09:00.948 [2024-11-27 21:41:23.844345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:00.948 [2024-11-27 21:41:23.844356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:00.948 [2024-11-27 21:41:23.844527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.948 NewBaseBdev 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.948 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:00.948 [ 00:09:00.948 { 00:09:00.948 "name": "NewBaseBdev", 00:09:00.948 "aliases": [ 00:09:00.948 "36de47ce-7a08-45bf-8fbd-d3e0663d1da3" 00:09:00.948 ], 00:09:00.948 "product_name": "Malloc disk", 00:09:00.948 "block_size": 512, 00:09:00.948 "num_blocks": 65536, 00:09:00.948 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:09:00.948 "assigned_rate_limits": { 00:09:00.948 "rw_ios_per_sec": 0, 00:09:00.948 "rw_mbytes_per_sec": 0, 00:09:00.948 "r_mbytes_per_sec": 0, 00:09:00.948 "w_mbytes_per_sec": 0 00:09:00.948 }, 00:09:00.948 "claimed": true, 00:09:00.948 "claim_type": "exclusive_write", 00:09:00.948 "zoned": false, 00:09:00.948 "supported_io_types": { 00:09:00.948 "read": true, 00:09:00.948 "write": true, 00:09:00.948 "unmap": true, 00:09:00.948 "flush": true, 00:09:00.948 "reset": true, 00:09:00.948 "nvme_admin": false, 00:09:00.948 "nvme_io": false, 00:09:00.948 "nvme_io_md": false, 00:09:00.948 "write_zeroes": true, 00:09:00.948 "zcopy": true, 00:09:00.948 "get_zone_info": false, 00:09:00.948 "zone_management": false, 00:09:00.948 "zone_append": false, 00:09:00.948 "compare": false, 00:09:00.948 "compare_and_write": false, 00:09:00.948 "abort": true, 00:09:00.948 "seek_hole": false, 00:09:00.948 "seek_data": false, 00:09:00.948 "copy": true, 00:09:00.948 "nvme_iov_md": false 00:09:00.948 }, 00:09:00.948 "memory_domains": [ 00:09:00.948 { 00:09:00.948 "dma_device_id": "system", 00:09:00.948 "dma_device_type": 1 00:09:00.948 }, 00:09:00.949 { 00:09:00.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.949 "dma_device_type": 2 00:09:00.949 } 00:09:00.949 ], 00:09:00.949 "driver_specific": {} 00:09:00.949 } 00:09:00.949 ] 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.949 "name": "Existed_Raid", 00:09:00.949 "uuid": "15ce9a9a-347f-411a-9e13-3608ca0d26e1", 00:09:00.949 "strip_size_kb": 64, 00:09:00.949 "state": "online", 00:09:00.949 "raid_level": "raid0", 00:09:00.949 "superblock": false, 00:09:00.949 "num_base_bdevs": 4, 00:09:00.949 
"num_base_bdevs_discovered": 4, 00:09:00.949 "num_base_bdevs_operational": 4, 00:09:00.949 "base_bdevs_list": [ 00:09:00.949 { 00:09:00.949 "name": "NewBaseBdev", 00:09:00.949 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:09:00.949 "is_configured": true, 00:09:00.949 "data_offset": 0, 00:09:00.949 "data_size": 65536 00:09:00.949 }, 00:09:00.949 { 00:09:00.949 "name": "BaseBdev2", 00:09:00.949 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:09:00.949 "is_configured": true, 00:09:00.949 "data_offset": 0, 00:09:00.949 "data_size": 65536 00:09:00.949 }, 00:09:00.949 { 00:09:00.949 "name": "BaseBdev3", 00:09:00.949 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:09:00.949 "is_configured": true, 00:09:00.949 "data_offset": 0, 00:09:00.949 "data_size": 65536 00:09:00.949 }, 00:09:00.949 { 00:09:00.949 "name": "BaseBdev4", 00:09:00.949 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:09:00.949 "is_configured": true, 00:09:00.949 "data_offset": 0, 00:09:00.949 "data_size": 65536 00:09:00.949 } 00:09:00.949 ] 00:09:00.949 }' 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.949 21:41:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:01.208 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:01.209 21:41:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:01.209 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:01.209 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.209 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.209 [2024-11-27 21:41:24.323462] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:01.468 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.468 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:01.468 "name": "Existed_Raid", 00:09:01.468 "aliases": [ 00:09:01.468 "15ce9a9a-347f-411a-9e13-3608ca0d26e1" 00:09:01.468 ], 00:09:01.469 "product_name": "Raid Volume", 00:09:01.469 "block_size": 512, 00:09:01.469 "num_blocks": 262144, 00:09:01.469 "uuid": "15ce9a9a-347f-411a-9e13-3608ca0d26e1", 00:09:01.469 "assigned_rate_limits": { 00:09:01.469 "rw_ios_per_sec": 0, 00:09:01.469 "rw_mbytes_per_sec": 0, 00:09:01.469 "r_mbytes_per_sec": 0, 00:09:01.469 "w_mbytes_per_sec": 0 00:09:01.469 }, 00:09:01.469 "claimed": false, 00:09:01.469 "zoned": false, 00:09:01.469 "supported_io_types": { 00:09:01.469 "read": true, 00:09:01.469 "write": true, 00:09:01.469 "unmap": true, 00:09:01.469 "flush": true, 00:09:01.469 "reset": true, 00:09:01.469 "nvme_admin": false, 00:09:01.469 "nvme_io": false, 00:09:01.469 "nvme_io_md": false, 00:09:01.469 "write_zeroes": true, 00:09:01.469 "zcopy": false, 00:09:01.469 "get_zone_info": false, 00:09:01.469 "zone_management": false, 00:09:01.469 "zone_append": false, 00:09:01.469 "compare": false, 00:09:01.469 "compare_and_write": false, 00:09:01.469 "abort": false, 00:09:01.469 "seek_hole": false, 00:09:01.469 "seek_data": false, 00:09:01.469 "copy": false, 00:09:01.469 "nvme_iov_md": false 00:09:01.469 }, 00:09:01.469 "memory_domains": [ 
00:09:01.469 { 00:09:01.469 "dma_device_id": "system", 00:09:01.469 "dma_device_type": 1 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.469 "dma_device_type": 2 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "system", 00:09:01.469 "dma_device_type": 1 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.469 "dma_device_type": 2 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "system", 00:09:01.469 "dma_device_type": 1 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.469 "dma_device_type": 2 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "system", 00:09:01.469 "dma_device_type": 1 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.469 "dma_device_type": 2 00:09:01.469 } 00:09:01.469 ], 00:09:01.469 "driver_specific": { 00:09:01.469 "raid": { 00:09:01.469 "uuid": "15ce9a9a-347f-411a-9e13-3608ca0d26e1", 00:09:01.469 "strip_size_kb": 64, 00:09:01.469 "state": "online", 00:09:01.469 "raid_level": "raid0", 00:09:01.469 "superblock": false, 00:09:01.469 "num_base_bdevs": 4, 00:09:01.469 "num_base_bdevs_discovered": 4, 00:09:01.469 "num_base_bdevs_operational": 4, 00:09:01.469 "base_bdevs_list": [ 00:09:01.469 { 00:09:01.469 "name": "NewBaseBdev", 00:09:01.469 "uuid": "36de47ce-7a08-45bf-8fbd-d3e0663d1da3", 00:09:01.469 "is_configured": true, 00:09:01.469 "data_offset": 0, 00:09:01.469 "data_size": 65536 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "name": "BaseBdev2", 00:09:01.469 "uuid": "e3cf1d02-6103-4ce2-9ceb-af23efda6549", 00:09:01.469 "is_configured": true, 00:09:01.469 "data_offset": 0, 00:09:01.469 "data_size": 65536 00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "name": "BaseBdev3", 00:09:01.469 "uuid": "bfa79f8f-ba5e-4704-b358-e6ef50c6e45a", 00:09:01.469 "is_configured": true, 00:09:01.469 "data_offset": 0, 00:09:01.469 "data_size": 65536 
00:09:01.469 }, 00:09:01.469 { 00:09:01.469 "name": "BaseBdev4", 00:09:01.469 "uuid": "9530f800-c574-4f11-bd9b-45f89ec55a6f", 00:09:01.469 "is_configured": true, 00:09:01.469 "data_offset": 0, 00:09:01.469 "data_size": 65536 00:09:01.469 } 00:09:01.469 ] 00:09:01.469 } 00:09:01.469 } 00:09:01.469 }' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:01.469 BaseBdev2 00:09:01.469 BaseBdev3 00:09:01.469 BaseBdev4' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.469 
21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.469 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.730 [2024-11-27 21:41:24.634602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:01.730 [2024-11-27 21:41:24.634673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:01.730 [2024-11-27 21:41:24.634755] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:01.730 [2024-11-27 21:41:24.634838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:01.730 [2024-11-27 21:41:24.634848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80074 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 80074 ']' 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 80074 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80074 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80074' 00:09:01.730 killing process with pid 80074 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 80074 00:09:01.730 [2024-11-27 21:41:24.684130] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:01.730 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 80074 00:09:01.730 [2024-11-27 21:41:24.723938] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.990 21:41:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:01.990 00:09:01.990 real 0m9.363s 00:09:01.990 user 0m16.064s 00:09:01.990 sys 0m1.933s 00:09:01.990 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.990 21:41:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.990 ************************************ 00:09:01.990 END TEST raid_state_function_test 00:09:01.990 ************************************ 00:09:01.990 21:41:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:01.990 21:41:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:01.990 21:41:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.990 21:41:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.990 ************************************ 00:09:01.990 START TEST raid_state_function_test_sb 00:09:01.990 ************************************ 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:01.990 
21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:01.990 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:01.991 Process raid pid: 80729 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80729 00:09:01.991 21:41:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80729' 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 80729 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80729 ']' 00:09:01.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:01.991 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.991 [2024-11-27 21:41:25.095570] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:01.991 [2024-11-27 21:41:25.095707] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.251 [2024-11-27 21:41:25.250016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.251 [2024-11-27 21:41:25.274760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.251 [2024-11-27 21:41:25.316777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.251 [2024-11-27 21:41:25.316915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.821 [2024-11-27 21:41:25.935153] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.821 [2024-11-27 21:41:25.935254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.821 [2024-11-27 21:41:25.935300] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.821 [2024-11-27 21:41:25.935325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.821 [2024-11-27 21:41:25.935343] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:02.821 [2024-11-27 21:41:25.935368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:02.821 [2024-11-27 21:41:25.935422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:02.821 [2024-11-27 21:41:25.935461] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.821 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.081 21:41:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.081 "name": "Existed_Raid", 00:09:03.081 "uuid": "d928d251-eb81-4dbb-8eab-18713ed33009", 00:09:03.081 "strip_size_kb": 64, 00:09:03.081 "state": "configuring", 00:09:03.081 "raid_level": "raid0", 00:09:03.081 "superblock": true, 00:09:03.081 "num_base_bdevs": 4, 00:09:03.081 "num_base_bdevs_discovered": 0, 00:09:03.081 "num_base_bdevs_operational": 4, 00:09:03.081 "base_bdevs_list": [ 00:09:03.081 { 00:09:03.081 "name": "BaseBdev1", 00:09:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.081 "is_configured": false, 00:09:03.081 "data_offset": 0, 00:09:03.081 "data_size": 0 00:09:03.081 }, 00:09:03.081 { 00:09:03.081 "name": "BaseBdev2", 00:09:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.081 "is_configured": false, 00:09:03.081 "data_offset": 0, 00:09:03.081 "data_size": 0 00:09:03.081 }, 00:09:03.081 { 00:09:03.081 "name": "BaseBdev3", 00:09:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.081 "is_configured": false, 00:09:03.081 "data_offset": 0, 00:09:03.081 "data_size": 0 00:09:03.081 }, 00:09:03.081 { 00:09:03.081 "name": "BaseBdev4", 00:09:03.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.081 "is_configured": false, 00:09:03.081 "data_offset": 0, 00:09:03.081 "data_size": 0 00:09:03.081 } 00:09:03.081 ] 00:09:03.081 }' 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.081 21:41:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 [2024-11-27 21:41:26.334379] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.342 [2024-11-27 21:41:26.334468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 [2024-11-27 21:41:26.342394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:03.342 [2024-11-27 21:41:26.342470] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:03.342 [2024-11-27 21:41:26.342497] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.342 [2024-11-27 21:41:26.342518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.342 [2024-11-27 21:41:26.342536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.342 [2024-11-27 21:41:26.342556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.342 [2024-11-27 21:41:26.342573] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:03.342 [2024-11-27 21:41:26.342593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 [2024-11-27 21:41:26.363106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.342 BaseBdev1 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.342 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.342 [ 00:09:03.343 { 00:09:03.343 "name": "BaseBdev1", 00:09:03.343 "aliases": [ 00:09:03.343 "17382b5f-d28d-42f7-9d43-6322d8aa15d7" 00:09:03.343 ], 00:09:03.343 "product_name": "Malloc disk", 00:09:03.343 "block_size": 512, 00:09:03.343 "num_blocks": 65536, 00:09:03.343 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:03.343 "assigned_rate_limits": { 00:09:03.343 "rw_ios_per_sec": 0, 00:09:03.343 "rw_mbytes_per_sec": 0, 00:09:03.343 "r_mbytes_per_sec": 0, 00:09:03.343 "w_mbytes_per_sec": 0 00:09:03.343 }, 00:09:03.343 "claimed": true, 00:09:03.343 "claim_type": "exclusive_write", 00:09:03.343 "zoned": false, 00:09:03.343 "supported_io_types": { 00:09:03.343 "read": true, 00:09:03.343 "write": true, 00:09:03.343 "unmap": true, 00:09:03.343 "flush": true, 00:09:03.343 "reset": true, 00:09:03.343 "nvme_admin": false, 00:09:03.343 "nvme_io": false, 00:09:03.343 "nvme_io_md": false, 00:09:03.343 "write_zeroes": true, 00:09:03.343 "zcopy": true, 00:09:03.343 "get_zone_info": false, 00:09:03.343 "zone_management": false, 00:09:03.343 "zone_append": false, 00:09:03.343 "compare": false, 00:09:03.343 "compare_and_write": false, 00:09:03.343 "abort": true, 00:09:03.343 "seek_hole": false, 00:09:03.343 "seek_data": false, 00:09:03.343 "copy": true, 00:09:03.343 "nvme_iov_md": false 00:09:03.343 }, 00:09:03.343 "memory_domains": [ 00:09:03.343 { 00:09:03.343 "dma_device_id": "system", 00:09:03.343 "dma_device_type": 1 00:09:03.343 }, 00:09:03.343 { 00:09:03.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.343 "dma_device_type": 2 00:09:03.343 } 00:09:03.343 ], 00:09:03.343 "driver_specific": {} 
00:09:03.343 } 00:09:03.343 ] 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.343 "name": "Existed_Raid", 00:09:03.343 "uuid": "ce61a160-8fa8-4bdb-aad5-fba92111287a", 00:09:03.343 "strip_size_kb": 64, 00:09:03.343 "state": "configuring", 00:09:03.343 "raid_level": "raid0", 00:09:03.343 "superblock": true, 00:09:03.343 "num_base_bdevs": 4, 00:09:03.343 "num_base_bdevs_discovered": 1, 00:09:03.343 "num_base_bdevs_operational": 4, 00:09:03.343 "base_bdevs_list": [ 00:09:03.343 { 00:09:03.343 "name": "BaseBdev1", 00:09:03.343 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:03.343 "is_configured": true, 00:09:03.343 "data_offset": 2048, 00:09:03.343 "data_size": 63488 00:09:03.343 }, 00:09:03.343 { 00:09:03.343 "name": "BaseBdev2", 00:09:03.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.343 "is_configured": false, 00:09:03.343 "data_offset": 0, 00:09:03.343 "data_size": 0 00:09:03.343 }, 00:09:03.343 { 00:09:03.343 "name": "BaseBdev3", 00:09:03.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.343 "is_configured": false, 00:09:03.343 "data_offset": 0, 00:09:03.343 "data_size": 0 00:09:03.343 }, 00:09:03.343 { 00:09:03.343 "name": "BaseBdev4", 00:09:03.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.343 "is_configured": false, 00:09:03.343 "data_offset": 0, 00:09:03.343 "data_size": 0 00:09:03.343 } 00:09:03.343 ] 00:09:03.343 }' 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.343 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.913 [2024-11-27 21:41:26.878272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.913 [2024-11-27 21:41:26.878375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.913 [2024-11-27 21:41:26.886296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.913 [2024-11-27 21:41:26.888163] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.913 [2024-11-27 21:41:26.888230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.913 [2024-11-27 21:41:26.888270] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:03.913 [2024-11-27 21:41:26.888307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:03.913 [2024-11-27 21:41:26.888334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:03.913 [2024-11-27 21:41:26.888372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:03.913 21:41:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.913 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.914 "name": 
"Existed_Raid", 00:09:03.914 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:03.914 "strip_size_kb": 64, 00:09:03.914 "state": "configuring", 00:09:03.914 "raid_level": "raid0", 00:09:03.914 "superblock": true, 00:09:03.914 "num_base_bdevs": 4, 00:09:03.914 "num_base_bdevs_discovered": 1, 00:09:03.914 "num_base_bdevs_operational": 4, 00:09:03.914 "base_bdevs_list": [ 00:09:03.914 { 00:09:03.914 "name": "BaseBdev1", 00:09:03.914 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:03.914 "is_configured": true, 00:09:03.914 "data_offset": 2048, 00:09:03.914 "data_size": 63488 00:09:03.914 }, 00:09:03.914 { 00:09:03.914 "name": "BaseBdev2", 00:09:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.914 "is_configured": false, 00:09:03.914 "data_offset": 0, 00:09:03.914 "data_size": 0 00:09:03.914 }, 00:09:03.914 { 00:09:03.914 "name": "BaseBdev3", 00:09:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.914 "is_configured": false, 00:09:03.914 "data_offset": 0, 00:09:03.914 "data_size": 0 00:09:03.914 }, 00:09:03.914 { 00:09:03.914 "name": "BaseBdev4", 00:09:03.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.914 "is_configured": false, 00:09:03.914 "data_offset": 0, 00:09:03.914 "data_size": 0 00:09:03.914 } 00:09:03.914 ] 00:09:03.914 }' 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.914 21:41:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.483 [2024-11-27 21:41:27.336309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:09:04.483 BaseBdev2 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.483 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.483 [ 00:09:04.483 { 00:09:04.483 "name": "BaseBdev2", 00:09:04.483 "aliases": [ 00:09:04.483 "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659" 00:09:04.483 ], 00:09:04.483 "product_name": "Malloc disk", 00:09:04.484 "block_size": 512, 00:09:04.484 "num_blocks": 65536, 00:09:04.484 "uuid": "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:04.484 
"assigned_rate_limits": { 00:09:04.484 "rw_ios_per_sec": 0, 00:09:04.484 "rw_mbytes_per_sec": 0, 00:09:04.484 "r_mbytes_per_sec": 0, 00:09:04.484 "w_mbytes_per_sec": 0 00:09:04.484 }, 00:09:04.484 "claimed": true, 00:09:04.484 "claim_type": "exclusive_write", 00:09:04.484 "zoned": false, 00:09:04.484 "supported_io_types": { 00:09:04.484 "read": true, 00:09:04.484 "write": true, 00:09:04.484 "unmap": true, 00:09:04.484 "flush": true, 00:09:04.484 "reset": true, 00:09:04.484 "nvme_admin": false, 00:09:04.484 "nvme_io": false, 00:09:04.484 "nvme_io_md": false, 00:09:04.484 "write_zeroes": true, 00:09:04.484 "zcopy": true, 00:09:04.484 "get_zone_info": false, 00:09:04.484 "zone_management": false, 00:09:04.484 "zone_append": false, 00:09:04.484 "compare": false, 00:09:04.484 "compare_and_write": false, 00:09:04.484 "abort": true, 00:09:04.484 "seek_hole": false, 00:09:04.484 "seek_data": false, 00:09:04.484 "copy": true, 00:09:04.484 "nvme_iov_md": false 00:09:04.484 }, 00:09:04.484 "memory_domains": [ 00:09:04.484 { 00:09:04.484 "dma_device_id": "system", 00:09:04.484 "dma_device_type": 1 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.484 "dma_device_type": 2 00:09:04.484 } 00:09:04.484 ], 00:09:04.484 "driver_specific": {} 00:09:04.484 } 00:09:04.484 ] 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.484 "name": "Existed_Raid", 00:09:04.484 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:04.484 "strip_size_kb": 64, 00:09:04.484 "state": "configuring", 00:09:04.484 "raid_level": "raid0", 00:09:04.484 "superblock": true, 00:09:04.484 "num_base_bdevs": 4, 00:09:04.484 "num_base_bdevs_discovered": 2, 00:09:04.484 "num_base_bdevs_operational": 4, 
00:09:04.484 "base_bdevs_list": [ 00:09:04.484 { 00:09:04.484 "name": "BaseBdev1", 00:09:04.484 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:04.484 "is_configured": true, 00:09:04.484 "data_offset": 2048, 00:09:04.484 "data_size": 63488 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "name": "BaseBdev2", 00:09:04.484 "uuid": "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:04.484 "is_configured": true, 00:09:04.484 "data_offset": 2048, 00:09:04.484 "data_size": 63488 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "name": "BaseBdev3", 00:09:04.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.484 "is_configured": false, 00:09:04.484 "data_offset": 0, 00:09:04.484 "data_size": 0 00:09:04.484 }, 00:09:04.484 { 00:09:04.484 "name": "BaseBdev4", 00:09:04.484 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.484 "is_configured": false, 00:09:04.484 "data_offset": 0, 00:09:04.484 "data_size": 0 00:09:04.484 } 00:09:04.484 ] 00:09:04.484 }' 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.484 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.744 [2024-11-27 21:41:27.795786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:04.744 BaseBdev3 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.744 [ 00:09:04.744 { 00:09:04.744 "name": "BaseBdev3", 00:09:04.744 "aliases": [ 00:09:04.744 "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0" 00:09:04.744 ], 00:09:04.744 "product_name": "Malloc disk", 00:09:04.744 "block_size": 512, 00:09:04.744 "num_blocks": 65536, 00:09:04.744 "uuid": "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0", 00:09:04.744 "assigned_rate_limits": { 00:09:04.744 "rw_ios_per_sec": 0, 00:09:04.744 "rw_mbytes_per_sec": 0, 00:09:04.744 "r_mbytes_per_sec": 0, 00:09:04.744 "w_mbytes_per_sec": 0 00:09:04.744 }, 00:09:04.744 "claimed": true, 00:09:04.744 "claim_type": "exclusive_write", 00:09:04.744 "zoned": false, 00:09:04.744 "supported_io_types": { 00:09:04.744 "read": true, 00:09:04.744 
"write": true, 00:09:04.744 "unmap": true, 00:09:04.744 "flush": true, 00:09:04.744 "reset": true, 00:09:04.744 "nvme_admin": false, 00:09:04.744 "nvme_io": false, 00:09:04.744 "nvme_io_md": false, 00:09:04.744 "write_zeroes": true, 00:09:04.744 "zcopy": true, 00:09:04.744 "get_zone_info": false, 00:09:04.744 "zone_management": false, 00:09:04.744 "zone_append": false, 00:09:04.744 "compare": false, 00:09:04.744 "compare_and_write": false, 00:09:04.744 "abort": true, 00:09:04.744 "seek_hole": false, 00:09:04.744 "seek_data": false, 00:09:04.744 "copy": true, 00:09:04.744 "nvme_iov_md": false 00:09:04.744 }, 00:09:04.744 "memory_domains": [ 00:09:04.744 { 00:09:04.744 "dma_device_id": "system", 00:09:04.744 "dma_device_type": 1 00:09:04.744 }, 00:09:04.744 { 00:09:04.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.744 "dma_device_type": 2 00:09:04.744 } 00:09:04.744 ], 00:09:04.744 "driver_specific": {} 00:09:04.744 } 00:09:04.744 ] 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.744 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.005 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.005 "name": "Existed_Raid", 00:09:05.005 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:05.005 "strip_size_kb": 64, 00:09:05.005 "state": "configuring", 00:09:05.005 "raid_level": "raid0", 00:09:05.005 "superblock": true, 00:09:05.005 "num_base_bdevs": 4, 00:09:05.005 "num_base_bdevs_discovered": 3, 00:09:05.005 "num_base_bdevs_operational": 4, 00:09:05.005 "base_bdevs_list": [ 00:09:05.005 { 00:09:05.005 "name": "BaseBdev1", 00:09:05.005 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:05.005 "is_configured": true, 00:09:05.005 "data_offset": 2048, 00:09:05.005 "data_size": 63488 00:09:05.005 }, 00:09:05.005 { 00:09:05.005 "name": "BaseBdev2", 00:09:05.005 "uuid": 
"ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:05.005 "is_configured": true, 00:09:05.005 "data_offset": 2048, 00:09:05.005 "data_size": 63488 00:09:05.005 }, 00:09:05.005 { 00:09:05.005 "name": "BaseBdev3", 00:09:05.005 "uuid": "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0", 00:09:05.005 "is_configured": true, 00:09:05.005 "data_offset": 2048, 00:09:05.005 "data_size": 63488 00:09:05.005 }, 00:09:05.005 { 00:09:05.005 "name": "BaseBdev4", 00:09:05.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.005 "is_configured": false, 00:09:05.005 "data_offset": 0, 00:09:05.005 "data_size": 0 00:09:05.005 } 00:09:05.005 ] 00:09:05.005 }' 00:09:05.005 21:41:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.005 21:41:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.265 [2024-11-27 21:41:28.333845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:05.265 [2024-11-27 21:41:28.334052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:05.265 [2024-11-27 21:41:28.334066] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:05.265 [2024-11-27 21:41:28.334350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:05.265 BaseBdev4 00:09:05.265 [2024-11-27 21:41:28.334480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:05.265 [2024-11-27 21:41:28.334492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:05.265 [2024-11-27 21:41:28.334611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.265 [ 00:09:05.265 { 00:09:05.265 "name": "BaseBdev4", 00:09:05.265 "aliases": [ 00:09:05.265 "2c14d91f-d256-4d45-92d4-dd233a9be809" 00:09:05.265 ], 00:09:05.265 "product_name": "Malloc disk", 00:09:05.265 "block_size": 512, 00:09:05.265 
"num_blocks": 65536, 00:09:05.265 "uuid": "2c14d91f-d256-4d45-92d4-dd233a9be809", 00:09:05.265 "assigned_rate_limits": { 00:09:05.265 "rw_ios_per_sec": 0, 00:09:05.265 "rw_mbytes_per_sec": 0, 00:09:05.265 "r_mbytes_per_sec": 0, 00:09:05.265 "w_mbytes_per_sec": 0 00:09:05.265 }, 00:09:05.265 "claimed": true, 00:09:05.265 "claim_type": "exclusive_write", 00:09:05.265 "zoned": false, 00:09:05.265 "supported_io_types": { 00:09:05.265 "read": true, 00:09:05.265 "write": true, 00:09:05.265 "unmap": true, 00:09:05.265 "flush": true, 00:09:05.265 "reset": true, 00:09:05.265 "nvme_admin": false, 00:09:05.265 "nvme_io": false, 00:09:05.265 "nvme_io_md": false, 00:09:05.265 "write_zeroes": true, 00:09:05.265 "zcopy": true, 00:09:05.265 "get_zone_info": false, 00:09:05.265 "zone_management": false, 00:09:05.265 "zone_append": false, 00:09:05.265 "compare": false, 00:09:05.265 "compare_and_write": false, 00:09:05.265 "abort": true, 00:09:05.265 "seek_hole": false, 00:09:05.265 "seek_data": false, 00:09:05.265 "copy": true, 00:09:05.265 "nvme_iov_md": false 00:09:05.265 }, 00:09:05.265 "memory_domains": [ 00:09:05.265 { 00:09:05.265 "dma_device_id": "system", 00:09:05.265 "dma_device_type": 1 00:09:05.265 }, 00:09:05.265 { 00:09:05.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.265 "dma_device_type": 2 00:09:05.265 } 00:09:05.265 ], 00:09:05.265 "driver_specific": {} 00:09:05.265 } 00:09:05.265 ] 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.265 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.524 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.524 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.524 "name": "Existed_Raid", 00:09:05.524 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:05.524 "strip_size_kb": 64, 00:09:05.524 "state": "online", 00:09:05.524 "raid_level": "raid0", 00:09:05.524 "superblock": true, 00:09:05.524 "num_base_bdevs": 4, 
00:09:05.524 "num_base_bdevs_discovered": 4, 00:09:05.524 "num_base_bdevs_operational": 4, 00:09:05.524 "base_bdevs_list": [ 00:09:05.524 { 00:09:05.524 "name": "BaseBdev1", 00:09:05.524 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:05.524 "is_configured": true, 00:09:05.524 "data_offset": 2048, 00:09:05.524 "data_size": 63488 00:09:05.524 }, 00:09:05.524 { 00:09:05.524 "name": "BaseBdev2", 00:09:05.524 "uuid": "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:05.524 "is_configured": true, 00:09:05.524 "data_offset": 2048, 00:09:05.524 "data_size": 63488 00:09:05.524 }, 00:09:05.524 { 00:09:05.524 "name": "BaseBdev3", 00:09:05.524 "uuid": "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0", 00:09:05.524 "is_configured": true, 00:09:05.524 "data_offset": 2048, 00:09:05.524 "data_size": 63488 00:09:05.525 }, 00:09:05.525 { 00:09:05.525 "name": "BaseBdev4", 00:09:05.525 "uuid": "2c14d91f-d256-4d45-92d4-dd233a9be809", 00:09:05.525 "is_configured": true, 00:09:05.525 "data_offset": 2048, 00:09:05.525 "data_size": 63488 00:09:05.525 } 00:09:05.525 ] 00:09:05.525 }' 00:09:05.525 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.525 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.785 
21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.785 [2024-11-27 21:41:28.777500] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.785 "name": "Existed_Raid", 00:09:05.785 "aliases": [ 00:09:05.785 "8f4293c9-226c-4ba8-96cc-8c856f29fd5c" 00:09:05.785 ], 00:09:05.785 "product_name": "Raid Volume", 00:09:05.785 "block_size": 512, 00:09:05.785 "num_blocks": 253952, 00:09:05.785 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:05.785 "assigned_rate_limits": { 00:09:05.785 "rw_ios_per_sec": 0, 00:09:05.785 "rw_mbytes_per_sec": 0, 00:09:05.785 "r_mbytes_per_sec": 0, 00:09:05.785 "w_mbytes_per_sec": 0 00:09:05.785 }, 00:09:05.785 "claimed": false, 00:09:05.785 "zoned": false, 00:09:05.785 "supported_io_types": { 00:09:05.785 "read": true, 00:09:05.785 "write": true, 00:09:05.785 "unmap": true, 00:09:05.785 "flush": true, 00:09:05.785 "reset": true, 00:09:05.785 "nvme_admin": false, 00:09:05.785 "nvme_io": false, 00:09:05.785 "nvme_io_md": false, 00:09:05.785 "write_zeroes": true, 00:09:05.785 "zcopy": false, 00:09:05.785 "get_zone_info": false, 00:09:05.785 "zone_management": false, 00:09:05.785 "zone_append": false, 00:09:05.785 "compare": false, 00:09:05.785 "compare_and_write": false, 00:09:05.785 "abort": false, 00:09:05.785 "seek_hole": false, 00:09:05.785 "seek_data": false, 00:09:05.785 "copy": false, 00:09:05.785 
"nvme_iov_md": false 00:09:05.785 }, 00:09:05.785 "memory_domains": [ 00:09:05.785 { 00:09:05.785 "dma_device_id": "system", 00:09:05.785 "dma_device_type": 1 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.785 "dma_device_type": 2 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "system", 00:09:05.785 "dma_device_type": 1 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.785 "dma_device_type": 2 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "system", 00:09:05.785 "dma_device_type": 1 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.785 "dma_device_type": 2 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "system", 00:09:05.785 "dma_device_type": 1 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.785 "dma_device_type": 2 00:09:05.785 } 00:09:05.785 ], 00:09:05.785 "driver_specific": { 00:09:05.785 "raid": { 00:09:05.785 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:05.785 "strip_size_kb": 64, 00:09:05.785 "state": "online", 00:09:05.785 "raid_level": "raid0", 00:09:05.785 "superblock": true, 00:09:05.785 "num_base_bdevs": 4, 00:09:05.785 "num_base_bdevs_discovered": 4, 00:09:05.785 "num_base_bdevs_operational": 4, 00:09:05.785 "base_bdevs_list": [ 00:09:05.785 { 00:09:05.785 "name": "BaseBdev1", 00:09:05.785 "uuid": "17382b5f-d28d-42f7-9d43-6322d8aa15d7", 00:09:05.785 "is_configured": true, 00:09:05.785 "data_offset": 2048, 00:09:05.785 "data_size": 63488 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "name": "BaseBdev2", 00:09:05.785 "uuid": "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:05.785 "is_configured": true, 00:09:05.785 "data_offset": 2048, 00:09:05.785 "data_size": 63488 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "name": "BaseBdev3", 00:09:05.785 "uuid": "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0", 00:09:05.785 "is_configured": true, 
00:09:05.785 "data_offset": 2048, 00:09:05.785 "data_size": 63488 00:09:05.785 }, 00:09:05.785 { 00:09:05.785 "name": "BaseBdev4", 00:09:05.785 "uuid": "2c14d91f-d256-4d45-92d4-dd233a9be809", 00:09:05.785 "is_configured": true, 00:09:05.785 "data_offset": 2048, 00:09:05.785 "data_size": 63488 00:09:05.785 } 00:09:05.785 ] 00:09:05.785 } 00:09:05.785 } 00:09:05.785 }' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:05.785 BaseBdev2 00:09:05.785 BaseBdev3 00:09:05.785 BaseBdev4' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.785 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.046 21:41:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 21:41:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 [2024-11-27 21:41:29.084667] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:06.046 [2024-11-27 21:41:29.084740] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:06.046 [2024-11-27 21:41:29.084815] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:06.046 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.046 "name": "Existed_Raid", 00:09:06.046 "uuid": "8f4293c9-226c-4ba8-96cc-8c856f29fd5c", 00:09:06.046 "strip_size_kb": 64, 00:09:06.046 "state": "offline", 00:09:06.046 "raid_level": "raid0", 00:09:06.046 "superblock": true, 00:09:06.046 "num_base_bdevs": 4, 00:09:06.046 "num_base_bdevs_discovered": 3, 00:09:06.046 "num_base_bdevs_operational": 3, 00:09:06.046 "base_bdevs_list": [ 00:09:06.046 { 00:09:06.046 "name": null, 00:09:06.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:06.046 "is_configured": false, 00:09:06.046 "data_offset": 0, 00:09:06.046 "data_size": 63488 00:09:06.046 }, 00:09:06.046 { 00:09:06.046 "name": "BaseBdev2", 00:09:06.046 "uuid": "ecf4539e-0e2a-4e5b-98c9-3dcfe67a9659", 00:09:06.046 "is_configured": true, 00:09:06.046 "data_offset": 2048, 00:09:06.046 "data_size": 63488 00:09:06.046 }, 00:09:06.046 { 00:09:06.046 "name": "BaseBdev3", 00:09:06.046 "uuid": "5be7845c-bbf6-47fe-bd2e-6d7d04f383e0", 00:09:06.046 "is_configured": true, 00:09:06.046 "data_offset": 2048, 00:09:06.046 "data_size": 63488 00:09:06.046 }, 00:09:06.046 { 00:09:06.046 "name": "BaseBdev4", 00:09:06.047 "uuid": "2c14d91f-d256-4d45-92d4-dd233a9be809", 00:09:06.047 "is_configured": true, 00:09:06.047 "data_offset": 2048, 00:09:06.047 "data_size": 63488 00:09:06.047 } 00:09:06.047 ] 00:09:06.047 }' 00:09:06.047 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.047 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.615 21:41:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 [2024-11-27 21:41:29.590979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 [2024-11-27 21:41:29.657943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:06.615 21:41:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.615 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.615 [2024-11-27 21:41:29.728924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:06.615 [2024-11-27 21:41:29.728969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 BaseBdev2 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.876 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.876 [ 00:09:06.876 { 00:09:06.876 "name": "BaseBdev2", 00:09:06.876 "aliases": [ 00:09:06.876 
"7b230e9b-2bad-4a8a-b2da-60c6f1128961" 00:09:06.876 ], 00:09:06.876 "product_name": "Malloc disk", 00:09:06.876 "block_size": 512, 00:09:06.876 "num_blocks": 65536, 00:09:06.876 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:06.876 "assigned_rate_limits": { 00:09:06.876 "rw_ios_per_sec": 0, 00:09:06.876 "rw_mbytes_per_sec": 0, 00:09:06.876 "r_mbytes_per_sec": 0, 00:09:06.876 "w_mbytes_per_sec": 0 00:09:06.876 }, 00:09:06.876 "claimed": false, 00:09:06.876 "zoned": false, 00:09:06.876 "supported_io_types": { 00:09:06.876 "read": true, 00:09:06.876 "write": true, 00:09:06.876 "unmap": true, 00:09:06.876 "flush": true, 00:09:06.876 "reset": true, 00:09:06.876 "nvme_admin": false, 00:09:06.877 "nvme_io": false, 00:09:06.877 "nvme_io_md": false, 00:09:06.877 "write_zeroes": true, 00:09:06.877 "zcopy": true, 00:09:06.877 "get_zone_info": false, 00:09:06.877 "zone_management": false, 00:09:06.877 "zone_append": false, 00:09:06.877 "compare": false, 00:09:06.877 "compare_and_write": false, 00:09:06.877 "abort": true, 00:09:06.877 "seek_hole": false, 00:09:06.877 "seek_data": false, 00:09:06.877 "copy": true, 00:09:06.877 "nvme_iov_md": false 00:09:06.877 }, 00:09:06.877 "memory_domains": [ 00:09:06.877 { 00:09:06.877 "dma_device_id": "system", 00:09:06.877 "dma_device_type": 1 00:09:06.877 }, 00:09:06.877 { 00:09:06.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.877 "dma_device_type": 2 00:09:06.877 } 00:09:06.877 ], 00:09:06.877 "driver_specific": {} 00:09:06.877 } 00:09:06.877 ] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:06.877 21:41:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 BaseBdev3 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 [ 00:09:06.877 { 
00:09:06.877 "name": "BaseBdev3", 00:09:06.877 "aliases": [ 00:09:06.877 "45db62a3-4dff-4aad-a50c-b7d3b140ae98" 00:09:06.877 ], 00:09:06.877 "product_name": "Malloc disk", 00:09:06.877 "block_size": 512, 00:09:06.877 "num_blocks": 65536, 00:09:06.877 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:06.877 "assigned_rate_limits": { 00:09:06.877 "rw_ios_per_sec": 0, 00:09:06.877 "rw_mbytes_per_sec": 0, 00:09:06.877 "r_mbytes_per_sec": 0, 00:09:06.877 "w_mbytes_per_sec": 0 00:09:06.877 }, 00:09:06.877 "claimed": false, 00:09:06.877 "zoned": false, 00:09:06.877 "supported_io_types": { 00:09:06.877 "read": true, 00:09:06.877 "write": true, 00:09:06.877 "unmap": true, 00:09:06.877 "flush": true, 00:09:06.877 "reset": true, 00:09:06.877 "nvme_admin": false, 00:09:06.877 "nvme_io": false, 00:09:06.877 "nvme_io_md": false, 00:09:06.877 "write_zeroes": true, 00:09:06.877 "zcopy": true, 00:09:06.877 "get_zone_info": false, 00:09:06.877 "zone_management": false, 00:09:06.877 "zone_append": false, 00:09:06.877 "compare": false, 00:09:06.877 "compare_and_write": false, 00:09:06.877 "abort": true, 00:09:06.877 "seek_hole": false, 00:09:06.877 "seek_data": false, 00:09:06.877 "copy": true, 00:09:06.877 "nvme_iov_md": false 00:09:06.877 }, 00:09:06.877 "memory_domains": [ 00:09:06.877 { 00:09:06.877 "dma_device_id": "system", 00:09:06.877 "dma_device_type": 1 00:09:06.877 }, 00:09:06.877 { 00:09:06.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.877 "dma_device_type": 2 00:09:06.877 } 00:09:06.877 ], 00:09:06.877 "driver_specific": {} 00:09:06.877 } 00:09:06.877 ] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 BaseBdev4 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:06.877 [ 00:09:06.877 { 00:09:06.877 "name": "BaseBdev4", 00:09:06.877 "aliases": [ 00:09:06.877 "d05c814a-b099-42ac-9642-561de0b5e24a" 00:09:06.877 ], 00:09:06.877 "product_name": "Malloc disk", 00:09:06.877 "block_size": 512, 00:09:06.877 "num_blocks": 65536, 00:09:06.877 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:06.877 "assigned_rate_limits": { 00:09:06.877 "rw_ios_per_sec": 0, 00:09:06.877 "rw_mbytes_per_sec": 0, 00:09:06.877 "r_mbytes_per_sec": 0, 00:09:06.877 "w_mbytes_per_sec": 0 00:09:06.877 }, 00:09:06.877 "claimed": false, 00:09:06.877 "zoned": false, 00:09:06.877 "supported_io_types": { 00:09:06.877 "read": true, 00:09:06.877 "write": true, 00:09:06.877 "unmap": true, 00:09:06.877 "flush": true, 00:09:06.877 "reset": true, 00:09:06.877 "nvme_admin": false, 00:09:06.877 "nvme_io": false, 00:09:06.877 "nvme_io_md": false, 00:09:06.877 "write_zeroes": true, 00:09:06.877 "zcopy": true, 00:09:06.877 "get_zone_info": false, 00:09:06.877 "zone_management": false, 00:09:06.877 "zone_append": false, 00:09:06.877 "compare": false, 00:09:06.877 "compare_and_write": false, 00:09:06.877 "abort": true, 00:09:06.877 "seek_hole": false, 00:09:06.877 "seek_data": false, 00:09:06.877 "copy": true, 00:09:06.877 "nvme_iov_md": false 00:09:06.877 }, 00:09:06.877 "memory_domains": [ 00:09:06.877 { 00:09:06.877 "dma_device_id": "system", 00:09:06.877 "dma_device_type": 1 00:09:06.877 }, 00:09:06.877 { 00:09:06.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:06.877 "dma_device_type": 2 00:09:06.877 } 00:09:06.877 ], 00:09:06.877 "driver_specific": {} 00:09:06.877 } 00:09:06.877 ] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:06.877 21:41:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.877 [2024-11-27 21:41:29.957500] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:06.877 [2024-11-27 21:41:29.957579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:06.877 [2024-11-27 21:41:29.957637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:06.877 [2024-11-27 21:41:29.959442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:06.877 [2024-11-27 21:41:29.959529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:06.877 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.878 21:41:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.137 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.137 "name": "Existed_Raid", 00:09:07.137 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:07.137 "strip_size_kb": 64, 00:09:07.137 "state": "configuring", 00:09:07.137 "raid_level": "raid0", 00:09:07.137 "superblock": true, 00:09:07.137 "num_base_bdevs": 4, 00:09:07.137 "num_base_bdevs_discovered": 3, 00:09:07.137 "num_base_bdevs_operational": 4, 00:09:07.137 "base_bdevs_list": [ 00:09:07.137 { 00:09:07.137 "name": "BaseBdev1", 00:09:07.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.137 "is_configured": false, 00:09:07.137 "data_offset": 0, 00:09:07.137 "data_size": 0 00:09:07.137 }, 00:09:07.137 { 00:09:07.137 "name": "BaseBdev2", 00:09:07.137 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:07.137 "is_configured": true, 00:09:07.138 "data_offset": 2048, 00:09:07.138 "data_size": 63488 
00:09:07.138 }, 00:09:07.138 { 00:09:07.138 "name": "BaseBdev3", 00:09:07.138 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:07.138 "is_configured": true, 00:09:07.138 "data_offset": 2048, 00:09:07.138 "data_size": 63488 00:09:07.138 }, 00:09:07.138 { 00:09:07.138 "name": "BaseBdev4", 00:09:07.138 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:07.138 "is_configured": true, 00:09:07.138 "data_offset": 2048, 00:09:07.138 "data_size": 63488 00:09:07.138 } 00:09:07.138 ] 00:09:07.138 }' 00:09:07.138 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.138 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.397 [2024-11-27 21:41:30.372799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.397 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.397 "name": "Existed_Raid", 00:09:07.397 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:07.397 "strip_size_kb": 64, 00:09:07.397 "state": "configuring", 00:09:07.397 "raid_level": "raid0", 00:09:07.397 "superblock": true, 00:09:07.397 "num_base_bdevs": 4, 00:09:07.397 "num_base_bdevs_discovered": 2, 00:09:07.397 "num_base_bdevs_operational": 4, 00:09:07.397 "base_bdevs_list": [ 00:09:07.397 { 00:09:07.397 "name": "BaseBdev1", 00:09:07.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.398 "is_configured": false, 00:09:07.398 "data_offset": 0, 00:09:07.398 "data_size": 0 00:09:07.398 }, 00:09:07.398 { 00:09:07.398 "name": null, 00:09:07.398 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:07.398 "is_configured": false, 00:09:07.398 "data_offset": 0, 00:09:07.398 "data_size": 63488 
00:09:07.398 }, 00:09:07.398 { 00:09:07.398 "name": "BaseBdev3", 00:09:07.398 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:07.398 "is_configured": true, 00:09:07.398 "data_offset": 2048, 00:09:07.398 "data_size": 63488 00:09:07.398 }, 00:09:07.398 { 00:09:07.398 "name": "BaseBdev4", 00:09:07.398 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:07.398 "is_configured": true, 00:09:07.398 "data_offset": 2048, 00:09:07.398 "data_size": 63488 00:09:07.398 } 00:09:07.398 ] 00:09:07.398 }' 00:09:07.398 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.398 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 [2024-11-27 21:41:30.870832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:07.969 BaseBdev1 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 [ 00:09:07.969 { 00:09:07.969 "name": "BaseBdev1", 00:09:07.969 "aliases": [ 00:09:07.969 "04d563ab-50b5-4710-b1a9-4e1374760c6e" 00:09:07.969 ], 00:09:07.969 "product_name": "Malloc disk", 00:09:07.969 "block_size": 512, 00:09:07.969 "num_blocks": 65536, 00:09:07.969 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:07.969 "assigned_rate_limits": { 00:09:07.969 "rw_ios_per_sec": 0, 00:09:07.969 "rw_mbytes_per_sec": 0, 
00:09:07.969 "r_mbytes_per_sec": 0, 00:09:07.969 "w_mbytes_per_sec": 0 00:09:07.969 }, 00:09:07.969 "claimed": true, 00:09:07.969 "claim_type": "exclusive_write", 00:09:07.969 "zoned": false, 00:09:07.969 "supported_io_types": { 00:09:07.969 "read": true, 00:09:07.969 "write": true, 00:09:07.969 "unmap": true, 00:09:07.969 "flush": true, 00:09:07.969 "reset": true, 00:09:07.969 "nvme_admin": false, 00:09:07.969 "nvme_io": false, 00:09:07.969 "nvme_io_md": false, 00:09:07.969 "write_zeroes": true, 00:09:07.969 "zcopy": true, 00:09:07.969 "get_zone_info": false, 00:09:07.969 "zone_management": false, 00:09:07.969 "zone_append": false, 00:09:07.969 "compare": false, 00:09:07.969 "compare_and_write": false, 00:09:07.969 "abort": true, 00:09:07.969 "seek_hole": false, 00:09:07.969 "seek_data": false, 00:09:07.969 "copy": true, 00:09:07.969 "nvme_iov_md": false 00:09:07.969 }, 00:09:07.969 "memory_domains": [ 00:09:07.969 { 00:09:07.969 "dma_device_id": "system", 00:09:07.969 "dma_device_type": 1 00:09:07.969 }, 00:09:07.969 { 00:09:07.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.969 "dma_device_type": 2 00:09:07.969 } 00:09:07.969 ], 00:09:07.969 "driver_specific": {} 00:09:07.969 } 00:09:07.969 ] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.969 21:41:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.969 "name": "Existed_Raid", 00:09:07.969 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:07.969 "strip_size_kb": 64, 00:09:07.969 "state": "configuring", 00:09:07.969 "raid_level": "raid0", 00:09:07.969 "superblock": true, 00:09:07.969 "num_base_bdevs": 4, 00:09:07.969 "num_base_bdevs_discovered": 3, 00:09:07.969 "num_base_bdevs_operational": 4, 00:09:07.969 "base_bdevs_list": [ 00:09:07.969 { 00:09:07.969 "name": "BaseBdev1", 00:09:07.969 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:07.969 "is_configured": true, 00:09:07.969 "data_offset": 2048, 00:09:07.969 "data_size": 63488 00:09:07.969 }, 00:09:07.969 { 
00:09:07.969 "name": null, 00:09:07.969 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:07.969 "is_configured": false, 00:09:07.969 "data_offset": 0, 00:09:07.969 "data_size": 63488 00:09:07.969 }, 00:09:07.969 { 00:09:07.969 "name": "BaseBdev3", 00:09:07.969 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:07.969 "is_configured": true, 00:09:07.969 "data_offset": 2048, 00:09:07.969 "data_size": 63488 00:09:07.969 }, 00:09:07.969 { 00:09:07.969 "name": "BaseBdev4", 00:09:07.969 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:07.969 "is_configured": true, 00:09:07.969 "data_offset": 2048, 00:09:07.969 "data_size": 63488 00:09:07.969 } 00:09:07.969 ] 00:09:07.969 }' 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.969 21:41:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.539 [2024-11-27 21:41:31.429952] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.539 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.540 21:41:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.540 "name": "Existed_Raid", 00:09:08.540 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:08.540 "strip_size_kb": 64, 00:09:08.540 "state": "configuring", 00:09:08.540 "raid_level": "raid0", 00:09:08.540 "superblock": true, 00:09:08.540 "num_base_bdevs": 4, 00:09:08.540 "num_base_bdevs_discovered": 2, 00:09:08.540 "num_base_bdevs_operational": 4, 00:09:08.540 "base_bdevs_list": [ 00:09:08.540 { 00:09:08.540 "name": "BaseBdev1", 00:09:08.540 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": null, 00:09:08.540 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:08.540 "is_configured": false, 00:09:08.540 "data_offset": 0, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": null, 00:09:08.540 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:08.540 "is_configured": false, 00:09:08.540 "data_offset": 0, 00:09:08.540 "data_size": 63488 00:09:08.540 }, 00:09:08.540 { 00:09:08.540 "name": "BaseBdev4", 00:09:08.540 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:08.540 "is_configured": true, 00:09:08.540 "data_offset": 2048, 00:09:08.540 "data_size": 63488 00:09:08.540 } 00:09:08.540 ] 00:09:08.540 }' 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.540 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.799 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.799 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.799 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.799 21:41:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:08.799 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.059 [2024-11-27 21:41:31.925132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.059 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.060 "name": "Existed_Raid", 00:09:09.060 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:09.060 "strip_size_kb": 64, 00:09:09.060 "state": "configuring", 00:09:09.060 "raid_level": "raid0", 00:09:09.060 "superblock": true, 00:09:09.060 "num_base_bdevs": 4, 00:09:09.060 "num_base_bdevs_discovered": 3, 00:09:09.060 "num_base_bdevs_operational": 4, 00:09:09.060 "base_bdevs_list": [ 00:09:09.060 { 00:09:09.060 "name": "BaseBdev1", 00:09:09.060 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:09.060 "is_configured": true, 00:09:09.060 "data_offset": 2048, 00:09:09.060 "data_size": 63488 00:09:09.060 }, 00:09:09.060 { 00:09:09.060 "name": null, 00:09:09.060 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:09.060 "is_configured": false, 00:09:09.060 "data_offset": 0, 00:09:09.060 "data_size": 63488 00:09:09.060 }, 00:09:09.060 { 00:09:09.060 "name": "BaseBdev3", 00:09:09.060 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:09.060 "is_configured": true, 00:09:09.060 "data_offset": 2048, 00:09:09.060 "data_size": 63488 00:09:09.060 }, 00:09:09.060 { 00:09:09.060 "name": "BaseBdev4", 00:09:09.060 "uuid": 
"d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:09.060 "is_configured": true, 00:09:09.060 "data_offset": 2048, 00:09:09.060 "data_size": 63488 00:09:09.060 } 00:09:09.060 ] 00:09:09.060 }' 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.060 21:41:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.320 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.320 [2024-11-27 21:41:32.440291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.579 "name": "Existed_Raid", 00:09:09.579 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:09.579 "strip_size_kb": 64, 00:09:09.579 "state": "configuring", 00:09:09.579 "raid_level": "raid0", 00:09:09.579 "superblock": true, 00:09:09.579 "num_base_bdevs": 4, 00:09:09.579 "num_base_bdevs_discovered": 2, 00:09:09.579 "num_base_bdevs_operational": 4, 00:09:09.579 "base_bdevs_list": [ 00:09:09.579 { 00:09:09.579 "name": null, 00:09:09.579 
"uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:09.579 "is_configured": false, 00:09:09.579 "data_offset": 0, 00:09:09.579 "data_size": 63488 00:09:09.579 }, 00:09:09.579 { 00:09:09.579 "name": null, 00:09:09.579 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:09.579 "is_configured": false, 00:09:09.579 "data_offset": 0, 00:09:09.579 "data_size": 63488 00:09:09.579 }, 00:09:09.579 { 00:09:09.579 "name": "BaseBdev3", 00:09:09.579 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:09.579 "is_configured": true, 00:09:09.579 "data_offset": 2048, 00:09:09.579 "data_size": 63488 00:09:09.579 }, 00:09:09.579 { 00:09:09.579 "name": "BaseBdev4", 00:09:09.579 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:09.579 "is_configured": true, 00:09:09.579 "data_offset": 2048, 00:09:09.579 "data_size": 63488 00:09:09.579 } 00:09:09.579 ] 00:09:09.579 }' 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.579 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.839 [2024-11-27 21:41:32.937907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.839 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.839 21:41:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.099 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.099 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.099 "name": "Existed_Raid", 00:09:10.099 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:10.099 "strip_size_kb": 64, 00:09:10.099 "state": "configuring", 00:09:10.099 "raid_level": "raid0", 00:09:10.099 "superblock": true, 00:09:10.099 "num_base_bdevs": 4, 00:09:10.099 "num_base_bdevs_discovered": 3, 00:09:10.099 "num_base_bdevs_operational": 4, 00:09:10.099 "base_bdevs_list": [ 00:09:10.099 { 00:09:10.099 "name": null, 00:09:10.099 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:10.099 "is_configured": false, 00:09:10.099 "data_offset": 0, 00:09:10.099 "data_size": 63488 00:09:10.099 }, 00:09:10.099 { 00:09:10.099 "name": "BaseBdev2", 00:09:10.099 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:10.099 "is_configured": true, 00:09:10.099 "data_offset": 2048, 00:09:10.099 "data_size": 63488 00:09:10.099 }, 00:09:10.099 { 00:09:10.099 "name": "BaseBdev3", 00:09:10.099 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:10.099 "is_configured": true, 00:09:10.099 "data_offset": 2048, 00:09:10.099 "data_size": 63488 00:09:10.099 }, 00:09:10.099 { 00:09:10.099 "name": "BaseBdev4", 00:09:10.099 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:10.099 "is_configured": true, 00:09:10.099 "data_offset": 2048, 00:09:10.099 "data_size": 63488 00:09:10.099 } 00:09:10.099 ] 00:09:10.099 }' 00:09:10.099 21:41:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.099 21:41:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.358 21:41:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 04d563ab-50b5-4710-b1a9-4e1374760c6e 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.358 [2024-11-27 21:41:33.459999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:10.358 [2024-11-27 21:41:33.460186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:10.358 [2024-11-27 21:41:33.460200] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:10.358 NewBaseBdev 00:09:10.358 [2024-11-27 21:41:33.460449] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:09:10.358 [2024-11-27 21:41:33.460560] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:10.358 [2024-11-27 21:41:33.460570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:10.358 [2024-11-27 21:41:33.460665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:10.358 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:10.359 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.359 21:41:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.618 [ 00:09:10.618 { 00:09:10.618 "name": "NewBaseBdev", 00:09:10.618 "aliases": [ 00:09:10.618 "04d563ab-50b5-4710-b1a9-4e1374760c6e" 00:09:10.618 ], 00:09:10.618 "product_name": "Malloc disk", 00:09:10.618 "block_size": 512, 00:09:10.618 "num_blocks": 65536, 00:09:10.618 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:10.618 "assigned_rate_limits": { 00:09:10.618 "rw_ios_per_sec": 0, 00:09:10.618 "rw_mbytes_per_sec": 0, 00:09:10.618 "r_mbytes_per_sec": 0, 00:09:10.618 "w_mbytes_per_sec": 0 00:09:10.618 }, 00:09:10.618 "claimed": true, 00:09:10.618 "claim_type": "exclusive_write", 00:09:10.618 "zoned": false, 00:09:10.618 "supported_io_types": { 00:09:10.618 "read": true, 00:09:10.618 "write": true, 00:09:10.618 "unmap": true, 00:09:10.618 "flush": true, 00:09:10.618 "reset": true, 00:09:10.618 "nvme_admin": false, 00:09:10.618 "nvme_io": false, 00:09:10.618 "nvme_io_md": false, 00:09:10.618 "write_zeroes": true, 00:09:10.618 "zcopy": true, 00:09:10.618 "get_zone_info": false, 00:09:10.618 "zone_management": false, 00:09:10.618 "zone_append": false, 00:09:10.618 "compare": false, 00:09:10.618 "compare_and_write": false, 00:09:10.618 "abort": true, 00:09:10.618 "seek_hole": false, 00:09:10.618 "seek_data": false, 00:09:10.618 "copy": true, 00:09:10.618 "nvme_iov_md": false 00:09:10.618 }, 00:09:10.618 "memory_domains": [ 00:09:10.618 { 00:09:10.618 "dma_device_id": "system", 00:09:10.618 "dma_device_type": 1 00:09:10.618 }, 00:09:10.618 { 00:09:10.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.618 "dma_device_type": 2 00:09:10.618 } 00:09:10.618 ], 00:09:10.618 "driver_specific": {} 00:09:10.618 } 00:09:10.618 ] 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:10.618 21:41:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.618 "name": "Existed_Raid", 00:09:10.618 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:10.618 "strip_size_kb": 64, 00:09:10.618 
"state": "online", 00:09:10.618 "raid_level": "raid0", 00:09:10.618 "superblock": true, 00:09:10.618 "num_base_bdevs": 4, 00:09:10.618 "num_base_bdevs_discovered": 4, 00:09:10.618 "num_base_bdevs_operational": 4, 00:09:10.618 "base_bdevs_list": [ 00:09:10.618 { 00:09:10.618 "name": "NewBaseBdev", 00:09:10.618 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:10.618 "is_configured": true, 00:09:10.618 "data_offset": 2048, 00:09:10.618 "data_size": 63488 00:09:10.618 }, 00:09:10.618 { 00:09:10.618 "name": "BaseBdev2", 00:09:10.618 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:10.618 "is_configured": true, 00:09:10.618 "data_offset": 2048, 00:09:10.618 "data_size": 63488 00:09:10.618 }, 00:09:10.618 { 00:09:10.618 "name": "BaseBdev3", 00:09:10.618 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:10.618 "is_configured": true, 00:09:10.618 "data_offset": 2048, 00:09:10.618 "data_size": 63488 00:09:10.618 }, 00:09:10.618 { 00:09:10.618 "name": "BaseBdev4", 00:09:10.618 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:10.618 "is_configured": true, 00:09:10.618 "data_offset": 2048, 00:09:10.618 "data_size": 63488 00:09:10.618 } 00:09:10.618 ] 00:09:10.618 }' 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.618 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:10.904 
21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:10.904 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:10.905 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.905 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.905 [2024-11-27 21:41:33.943541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.905 21:41:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.905 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.905 "name": "Existed_Raid", 00:09:10.905 "aliases": [ 00:09:10.905 "b1ba2dd9-6733-4c88-921b-4766e4418e23" 00:09:10.905 ], 00:09:10.905 "product_name": "Raid Volume", 00:09:10.905 "block_size": 512, 00:09:10.905 "num_blocks": 253952, 00:09:10.905 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:10.905 "assigned_rate_limits": { 00:09:10.905 "rw_ios_per_sec": 0, 00:09:10.905 "rw_mbytes_per_sec": 0, 00:09:10.905 "r_mbytes_per_sec": 0, 00:09:10.905 "w_mbytes_per_sec": 0 00:09:10.905 }, 00:09:10.905 "claimed": false, 00:09:10.905 "zoned": false, 00:09:10.905 "supported_io_types": { 00:09:10.905 "read": true, 00:09:10.905 "write": true, 00:09:10.905 "unmap": true, 00:09:10.905 "flush": true, 00:09:10.905 "reset": true, 00:09:10.905 "nvme_admin": false, 00:09:10.905 "nvme_io": false, 00:09:10.905 "nvme_io_md": false, 00:09:10.905 "write_zeroes": true, 00:09:10.905 "zcopy": false, 00:09:10.905 "get_zone_info": false, 00:09:10.905 "zone_management": false, 00:09:10.905 "zone_append": false, 00:09:10.905 "compare": false, 00:09:10.905 "compare_and_write": false, 00:09:10.905 "abort": 
false, 00:09:10.905 "seek_hole": false, 00:09:10.905 "seek_data": false, 00:09:10.905 "copy": false, 00:09:10.905 "nvme_iov_md": false 00:09:10.905 }, 00:09:10.905 "memory_domains": [ 00:09:10.905 { 00:09:10.905 "dma_device_id": "system", 00:09:10.905 "dma_device_type": 1 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.905 "dma_device_type": 2 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "system", 00:09:10.905 "dma_device_type": 1 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.905 "dma_device_type": 2 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "system", 00:09:10.905 "dma_device_type": 1 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.905 "dma_device_type": 2 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "system", 00:09:10.905 "dma_device_type": 1 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.905 "dma_device_type": 2 00:09:10.905 } 00:09:10.905 ], 00:09:10.905 "driver_specific": { 00:09:10.905 "raid": { 00:09:10.905 "uuid": "b1ba2dd9-6733-4c88-921b-4766e4418e23", 00:09:10.905 "strip_size_kb": 64, 00:09:10.905 "state": "online", 00:09:10.905 "raid_level": "raid0", 00:09:10.905 "superblock": true, 00:09:10.905 "num_base_bdevs": 4, 00:09:10.905 "num_base_bdevs_discovered": 4, 00:09:10.905 "num_base_bdevs_operational": 4, 00:09:10.905 "base_bdevs_list": [ 00:09:10.905 { 00:09:10.905 "name": "NewBaseBdev", 00:09:10.905 "uuid": "04d563ab-50b5-4710-b1a9-4e1374760c6e", 00:09:10.905 "is_configured": true, 00:09:10.905 "data_offset": 2048, 00:09:10.905 "data_size": 63488 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "name": "BaseBdev2", 00:09:10.905 "uuid": "7b230e9b-2bad-4a8a-b2da-60c6f1128961", 00:09:10.905 "is_configured": true, 00:09:10.905 "data_offset": 2048, 00:09:10.905 "data_size": 63488 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 
"name": "BaseBdev3", 00:09:10.905 "uuid": "45db62a3-4dff-4aad-a50c-b7d3b140ae98", 00:09:10.905 "is_configured": true, 00:09:10.905 "data_offset": 2048, 00:09:10.905 "data_size": 63488 00:09:10.905 }, 00:09:10.905 { 00:09:10.905 "name": "BaseBdev4", 00:09:10.905 "uuid": "d05c814a-b099-42ac-9642-561de0b5e24a", 00:09:10.905 "is_configured": true, 00:09:10.905 "data_offset": 2048, 00:09:10.905 "data_size": 63488 00:09:10.905 } 00:09:10.905 ] 00:09:10.905 } 00:09:10.905 } 00:09:10.905 }' 00:09:10.905 21:41:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.905 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:10.905 BaseBdev2 00:09:10.905 BaseBdev3 00:09:10.905 BaseBdev4' 00:09:10.905 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.173 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.174 21:41:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.174 [2024-11-27 21:41:34.270683] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:11.174 [2024-11-27 21:41:34.270750] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.174 [2024-11-27 21:41:34.270842] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.174 [2024-11-27 21:41:34.270955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.174 [2024-11-27 21:41:34.271000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80729 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80729 ']' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80729 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.174 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80729 00:09:11.434 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.434 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.434 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80729' 00:09:11.434 killing process with pid 80729 00:09:11.434 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80729 00:09:11.434 [2024-11-27 21:41:34.320998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.434 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80729 00:09:11.434 [2024-11-27 21:41:34.361016] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.694 21:41:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:11.694 00:09:11.694 real 0m9.570s 00:09:11.694 user 0m16.462s 00:09:11.694 sys 0m1.940s 00:09:11.694 ************************************ 00:09:11.694 END TEST raid_state_function_test_sb 00:09:11.694 
************************************ 00:09:11.694 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.694 21:41:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.694 21:41:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:09:11.694 21:41:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:11.694 21:41:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.694 21:41:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.694 ************************************ 00:09:11.694 START TEST raid_superblock_test 00:09:11.694 ************************************ 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81377 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81377 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81377 ']' 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.694 21:41:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.694 [2024-11-27 21:41:34.730868] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:11.694 [2024-11-27 21:41:34.731070] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81377 ] 00:09:11.954 [2024-11-27 21:41:34.885751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.954 [2024-11-27 21:41:34.910023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.954 [2024-11-27 21:41:34.951696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:11.954 [2024-11-27 21:41:34.951741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.522 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.522 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.522 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:12.523 
21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 malloc1 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 [2024-11-27 21:41:35.570525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:12.523 [2024-11-27 21:41:35.570627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.523 [2024-11-27 21:41:35.570685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:12.523 [2024-11-27 21:41:35.570727] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.523 [2024-11-27 21:41:35.572865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.523 [2024-11-27 21:41:35.572935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:12.523 pt1 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 malloc2 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 [2024-11-27 21:41:35.602916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:12.523 [2024-11-27 21:41:35.603002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.523 [2024-11-27 21:41:35.603041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:12.523 [2024-11-27 21:41:35.603051] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.523 [2024-11-27 21:41:35.605080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.523 [2024-11-27 21:41:35.605114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:12.523 
pt2 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 malloc3 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.523 [2024-11-27 21:41:35.631237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:12.523 [2024-11-27 21:41:35.631339] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.523 [2024-11-27 21:41:35.631374] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:12.523 [2024-11-27 21:41:35.631403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.523 [2024-11-27 21:41:35.633484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.523 [2024-11-27 21:41:35.633554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:12.523 pt3 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.523 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.784 malloc4 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.784 [2024-11-27 21:41:35.674131] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:12.784 [2024-11-27 21:41:35.674213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.784 [2024-11-27 21:41:35.674260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:12.784 [2024-11-27 21:41:35.674291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.784 [2024-11-27 21:41:35.676350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.784 [2024-11-27 21:41:35.676418] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:12.784 pt4 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.784 [2024-11-27 21:41:35.686148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:12.784 [2024-11-27 
21:41:35.687989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:12.784 [2024-11-27 21:41:35.688092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:12.784 [2024-11-27 21:41:35.688222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:12.784 [2024-11-27 21:41:35.688427] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:12.784 [2024-11-27 21:41:35.688477] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:12.784 [2024-11-27 21:41:35.688780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:12.784 [2024-11-27 21:41:35.688979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:12.784 [2024-11-27 21:41:35.689023] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:12.784 [2024-11-27 21:41:35.689181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.784 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.784 "name": "raid_bdev1", 00:09:12.784 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:12.784 "strip_size_kb": 64, 00:09:12.785 "state": "online", 00:09:12.785 "raid_level": "raid0", 00:09:12.785 "superblock": true, 00:09:12.785 "num_base_bdevs": 4, 00:09:12.785 "num_base_bdevs_discovered": 4, 00:09:12.785 "num_base_bdevs_operational": 4, 00:09:12.785 "base_bdevs_list": [ 00:09:12.785 { 00:09:12.785 "name": "pt1", 00:09:12.785 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:12.785 "is_configured": true, 00:09:12.785 "data_offset": 2048, 00:09:12.785 "data_size": 63488 00:09:12.785 }, 00:09:12.785 { 00:09:12.785 "name": "pt2", 00:09:12.785 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:12.785 "is_configured": true, 00:09:12.785 "data_offset": 2048, 00:09:12.785 "data_size": 63488 00:09:12.785 }, 00:09:12.785 { 00:09:12.785 "name": "pt3", 00:09:12.785 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:12.785 "is_configured": true, 00:09:12.785 "data_offset": 2048, 00:09:12.785 
"data_size": 63488 00:09:12.785 }, 00:09:12.785 { 00:09:12.785 "name": "pt4", 00:09:12.785 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:12.785 "is_configured": true, 00:09:12.785 "data_offset": 2048, 00:09:12.785 "data_size": 63488 00:09:12.785 } 00:09:12.785 ] 00:09:12.785 }' 00:09:12.785 21:41:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.785 21:41:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.044 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.044 [2024-11-27 21:41:36.141673] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.304 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.304 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:13.304 "name": "raid_bdev1", 00:09:13.304 "aliases": [ 00:09:13.304 "fda6f48b-52d8-4381-a3ec-5657af96e9ba" 
00:09:13.304 ], 00:09:13.304 "product_name": "Raid Volume", 00:09:13.304 "block_size": 512, 00:09:13.304 "num_blocks": 253952, 00:09:13.304 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:13.304 "assigned_rate_limits": { 00:09:13.304 "rw_ios_per_sec": 0, 00:09:13.304 "rw_mbytes_per_sec": 0, 00:09:13.304 "r_mbytes_per_sec": 0, 00:09:13.304 "w_mbytes_per_sec": 0 00:09:13.304 }, 00:09:13.304 "claimed": false, 00:09:13.304 "zoned": false, 00:09:13.304 "supported_io_types": { 00:09:13.304 "read": true, 00:09:13.304 "write": true, 00:09:13.304 "unmap": true, 00:09:13.304 "flush": true, 00:09:13.304 "reset": true, 00:09:13.304 "nvme_admin": false, 00:09:13.304 "nvme_io": false, 00:09:13.304 "nvme_io_md": false, 00:09:13.304 "write_zeroes": true, 00:09:13.304 "zcopy": false, 00:09:13.304 "get_zone_info": false, 00:09:13.304 "zone_management": false, 00:09:13.304 "zone_append": false, 00:09:13.304 "compare": false, 00:09:13.304 "compare_and_write": false, 00:09:13.304 "abort": false, 00:09:13.304 "seek_hole": false, 00:09:13.304 "seek_data": false, 00:09:13.304 "copy": false, 00:09:13.304 "nvme_iov_md": false 00:09:13.304 }, 00:09:13.304 "memory_domains": [ 00:09:13.304 { 00:09:13.304 "dma_device_id": "system", 00:09:13.304 "dma_device_type": 1 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.304 "dma_device_type": 2 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "system", 00:09:13.304 "dma_device_type": 1 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.304 "dma_device_type": 2 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "system", 00:09:13.304 "dma_device_type": 1 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.304 "dma_device_type": 2 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": "system", 00:09:13.304 "dma_device_type": 1 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:13.304 "dma_device_type": 2 00:09:13.304 } 00:09:13.304 ], 00:09:13.304 "driver_specific": { 00:09:13.304 "raid": { 00:09:13.304 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:13.304 "strip_size_kb": 64, 00:09:13.304 "state": "online", 00:09:13.304 "raid_level": "raid0", 00:09:13.304 "superblock": true, 00:09:13.304 "num_base_bdevs": 4, 00:09:13.304 "num_base_bdevs_discovered": 4, 00:09:13.304 "num_base_bdevs_operational": 4, 00:09:13.304 "base_bdevs_list": [ 00:09:13.304 { 00:09:13.304 "name": "pt1", 00:09:13.304 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.304 "is_configured": true, 00:09:13.304 "data_offset": 2048, 00:09:13.304 "data_size": 63488 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "name": "pt2", 00:09:13.304 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.304 "is_configured": true, 00:09:13.304 "data_offset": 2048, 00:09:13.304 "data_size": 63488 00:09:13.304 }, 00:09:13.304 { 00:09:13.304 "name": "pt3", 00:09:13.304 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:13.304 "is_configured": true, 00:09:13.304 "data_offset": 2048, 00:09:13.304 "data_size": 63488 00:09:13.305 }, 00:09:13.305 { 00:09:13.305 "name": "pt4", 00:09:13.305 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:13.305 "is_configured": true, 00:09:13.305 "data_offset": 2048, 00:09:13.305 "data_size": 63488 00:09:13.305 } 00:09:13.305 ] 00:09:13.305 } 00:09:13.305 } 00:09:13.305 }' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:13.305 pt2 00:09:13.305 pt3 00:09:13.305 pt4' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.305 21:41:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.305 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:13.565 [2024-11-27 21:41:36.489150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=fda6f48b-52d8-4381-a3ec-5657af96e9ba 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z fda6f48b-52d8-4381-a3ec-5657af96e9ba ']' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 [2024-11-27 21:41:36.536729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.565 [2024-11-27 21:41:36.536768] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:13.565 [2024-11-27 21:41:36.536869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:13.565 [2024-11-27 21:41:36.536950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:13.565 [2024-11-27 21:41:36.536961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:13.565 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.566 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.825 [2024-11-27 21:41:36.688519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:13.825 [2024-11-27 21:41:36.690415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:13.825 [2024-11-27 21:41:36.690455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:13.825 [2024-11-27 21:41:36.690483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:13.825 [2024-11-27 21:41:36.690531] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:13.825 [2024-11-27 21:41:36.690604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:13.825 [2024-11-27 21:41:36.690639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:13.825 [2024-11-27 21:41:36.690654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:13.825 [2024-11-27 21:41:36.690668] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:13.825 [2024-11-27 21:41:36.690682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state 
configuring 00:09:13.825 request: 00:09:13.825 { 00:09:13.825 "name": "raid_bdev1", 00:09:13.825 "raid_level": "raid0", 00:09:13.825 "base_bdevs": [ 00:09:13.825 "malloc1", 00:09:13.825 "malloc2", 00:09:13.825 "malloc3", 00:09:13.825 "malloc4" 00:09:13.825 ], 00:09:13.825 "strip_size_kb": 64, 00:09:13.825 "superblock": false, 00:09:13.825 "method": "bdev_raid_create", 00:09:13.825 "req_id": 1 00:09:13.825 } 00:09:13.825 Got JSON-RPC error response 00:09:13.825 response: 00:09:13.825 { 00:09:13.825 "code": -17, 00:09:13.825 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:13.825 } 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.825 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.825 [2024-11-27 21:41:36.740378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.825 [2024-11-27 21:41:36.740470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.825 [2024-11-27 21:41:36.740514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:13.825 [2024-11-27 21:41:36.740543] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.825 [2024-11-27 21:41:36.742751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.826 [2024-11-27 21:41:36.742832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.826 [2024-11-27 21:41:36.742928] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:13.826 [2024-11-27 21:41:36.743018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.826 pt1 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.826 "name": "raid_bdev1", 00:09:13.826 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:13.826 "strip_size_kb": 64, 00:09:13.826 "state": "configuring", 00:09:13.826 "raid_level": "raid0", 00:09:13.826 "superblock": true, 00:09:13.826 "num_base_bdevs": 4, 00:09:13.826 "num_base_bdevs_discovered": 1, 00:09:13.826 "num_base_bdevs_operational": 4, 00:09:13.826 "base_bdevs_list": [ 00:09:13.826 { 00:09:13.826 "name": "pt1", 00:09:13.826 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.826 "is_configured": true, 00:09:13.826 "data_offset": 2048, 00:09:13.826 "data_size": 63488 00:09:13.826 }, 00:09:13.826 { 00:09:13.826 "name": null, 00:09:13.826 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.826 "is_configured": false, 00:09:13.826 "data_offset": 2048, 00:09:13.826 "data_size": 63488 00:09:13.826 }, 00:09:13.826 { 00:09:13.826 "name": null, 00:09:13.826 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:13.826 "is_configured": false, 00:09:13.826 "data_offset": 2048, 00:09:13.826 "data_size": 63488 00:09:13.826 }, 00:09:13.826 { 00:09:13.826 "name": null, 00:09:13.826 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:13.826 "is_configured": false, 00:09:13.826 "data_offset": 2048, 00:09:13.826 "data_size": 63488 00:09:13.826 } 00:09:13.826 ] 00:09:13.826 }' 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.826 21:41:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.086 [2024-11-27 21:41:37.151730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.086 [2024-11-27 21:41:37.151793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.086 [2024-11-27 21:41:37.151826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:14.086 [2024-11-27 21:41:37.151836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.086 [2024-11-27 21:41:37.152296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.086 [2024-11-27 21:41:37.152320] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.086 [2024-11-27 21:41:37.152398] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.086 [2024-11-27 21:41:37.152421] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.086 pt2 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.086 [2024-11-27 21:41:37.159727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.086 21:41:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.086 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.346 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.346 "name": "raid_bdev1", 00:09:14.346 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:14.346 "strip_size_kb": 64, 00:09:14.346 "state": "configuring", 00:09:14.346 "raid_level": "raid0", 00:09:14.346 "superblock": true, 00:09:14.346 "num_base_bdevs": 4, 00:09:14.346 "num_base_bdevs_discovered": 1, 00:09:14.346 "num_base_bdevs_operational": 4, 00:09:14.346 "base_bdevs_list": [ 00:09:14.346 { 00:09:14.346 "name": "pt1", 00:09:14.346 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.346 "is_configured": true, 00:09:14.346 "data_offset": 2048, 00:09:14.346 "data_size": 63488 00:09:14.346 }, 00:09:14.346 { 00:09:14.346 "name": null, 00:09:14.346 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.346 "is_configured": false, 00:09:14.346 "data_offset": 0, 00:09:14.346 "data_size": 63488 00:09:14.346 }, 00:09:14.346 { 00:09:14.346 "name": null, 00:09:14.346 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.346 "is_configured": false, 00:09:14.346 "data_offset": 2048, 00:09:14.346 "data_size": 63488 00:09:14.346 }, 00:09:14.346 { 00:09:14.346 "name": null, 00:09:14.346 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:14.346 "is_configured": false, 00:09:14.346 "data_offset": 2048, 00:09:14.346 "data_size": 63488 00:09:14.346 } 00:09:14.346 ] 00:09:14.346 }' 00:09:14.346 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.346 21:41:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.607 [2024-11-27 21:41:37.650893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.607 [2024-11-27 21:41:37.651002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.607 [2024-11-27 21:41:37.651038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:14.607 [2024-11-27 21:41:37.651067] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.607 [2024-11-27 21:41:37.651516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.607 [2024-11-27 21:41:37.651575] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.607 [2024-11-27 21:41:37.651692] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.607 [2024-11-27 21:41:37.651760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.607 pt2 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.607 [2024-11-27 21:41:37.662825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:14.607 [2024-11-27 21:41:37.662917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.607 [2024-11-27 21:41:37.662947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:14.607 [2024-11-27 21:41:37.662975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.607 [2024-11-27 21:41:37.663352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.607 [2024-11-27 21:41:37.663408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:14.607 [2024-11-27 21:41:37.663494] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:14.607 [2024-11-27 21:41:37.663542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:14.607 pt3 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.607 [2024-11-27 21:41:37.674809] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:14.607 [2024-11-27 21:41:37.674852] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.607 [2024-11-27 21:41:37.674864] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:14.607 [2024-11-27 21:41:37.674873] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.607 [2024-11-27 21:41:37.675149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.607 [2024-11-27 21:41:37.675166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:14.607 [2024-11-27 21:41:37.675212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:14.607 [2024-11-27 21:41:37.675229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:14.607 [2024-11-27 21:41:37.675319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:14.607 [2024-11-27 21:41:37.675329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:14.607 [2024-11-27 21:41:37.675539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:14.607 [2024-11-27 21:41:37.675649] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:14.607 [2024-11-27 21:41:37.675657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:14.607 [2024-11-27 21:41:37.675746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.607 pt4 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.607 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.867 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.867 "name": "raid_bdev1", 00:09:14.867 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:14.867 "strip_size_kb": 64, 00:09:14.867 "state": "online", 00:09:14.867 "raid_level": "raid0", 00:09:14.867 
"superblock": true, 00:09:14.867 "num_base_bdevs": 4, 00:09:14.867 "num_base_bdevs_discovered": 4, 00:09:14.867 "num_base_bdevs_operational": 4, 00:09:14.867 "base_bdevs_list": [ 00:09:14.867 { 00:09:14.867 "name": "pt1", 00:09:14.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.867 "is_configured": true, 00:09:14.867 "data_offset": 2048, 00:09:14.867 "data_size": 63488 00:09:14.867 }, 00:09:14.867 { 00:09:14.867 "name": "pt2", 00:09:14.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.867 "is_configured": true, 00:09:14.867 "data_offset": 2048, 00:09:14.867 "data_size": 63488 00:09:14.867 }, 00:09:14.867 { 00:09:14.867 "name": "pt3", 00:09:14.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:14.867 "is_configured": true, 00:09:14.867 "data_offset": 2048, 00:09:14.867 "data_size": 63488 00:09:14.867 }, 00:09:14.867 { 00:09:14.867 "name": "pt4", 00:09:14.867 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:14.867 "is_configured": true, 00:09:14.867 "data_offset": 2048, 00:09:14.867 "data_size": 63488 00:09:14.867 } 00:09:14.867 ] 00:09:14.867 }' 00:09:14.867 21:41:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.867 21:41:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.127 21:41:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.127 [2024-11-27 21:41:38.134393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.127 "name": "raid_bdev1", 00:09:15.127 "aliases": [ 00:09:15.127 "fda6f48b-52d8-4381-a3ec-5657af96e9ba" 00:09:15.127 ], 00:09:15.127 "product_name": "Raid Volume", 00:09:15.127 "block_size": 512, 00:09:15.127 "num_blocks": 253952, 00:09:15.127 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:15.127 "assigned_rate_limits": { 00:09:15.127 "rw_ios_per_sec": 0, 00:09:15.127 "rw_mbytes_per_sec": 0, 00:09:15.127 "r_mbytes_per_sec": 0, 00:09:15.127 "w_mbytes_per_sec": 0 00:09:15.127 }, 00:09:15.127 "claimed": false, 00:09:15.127 "zoned": false, 00:09:15.127 "supported_io_types": { 00:09:15.127 "read": true, 00:09:15.127 "write": true, 00:09:15.127 "unmap": true, 00:09:15.127 "flush": true, 00:09:15.127 "reset": true, 00:09:15.127 "nvme_admin": false, 00:09:15.127 "nvme_io": false, 00:09:15.127 "nvme_io_md": false, 00:09:15.127 "write_zeroes": true, 00:09:15.127 "zcopy": false, 00:09:15.127 "get_zone_info": false, 00:09:15.127 "zone_management": false, 00:09:15.127 "zone_append": false, 00:09:15.127 "compare": false, 00:09:15.127 "compare_and_write": false, 00:09:15.127 "abort": false, 00:09:15.127 "seek_hole": false, 00:09:15.127 "seek_data": false, 00:09:15.127 "copy": false, 00:09:15.127 "nvme_iov_md": false 00:09:15.127 }, 00:09:15.127 
"memory_domains": [ 00:09:15.127 { 00:09:15.127 "dma_device_id": "system", 00:09:15.127 "dma_device_type": 1 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.127 "dma_device_type": 2 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "system", 00:09:15.127 "dma_device_type": 1 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.127 "dma_device_type": 2 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "system", 00:09:15.127 "dma_device_type": 1 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.127 "dma_device_type": 2 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "system", 00:09:15.127 "dma_device_type": 1 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.127 "dma_device_type": 2 00:09:15.127 } 00:09:15.127 ], 00:09:15.127 "driver_specific": { 00:09:15.127 "raid": { 00:09:15.127 "uuid": "fda6f48b-52d8-4381-a3ec-5657af96e9ba", 00:09:15.127 "strip_size_kb": 64, 00:09:15.127 "state": "online", 00:09:15.127 "raid_level": "raid0", 00:09:15.127 "superblock": true, 00:09:15.127 "num_base_bdevs": 4, 00:09:15.127 "num_base_bdevs_discovered": 4, 00:09:15.127 "num_base_bdevs_operational": 4, 00:09:15.127 "base_bdevs_list": [ 00:09:15.127 { 00:09:15.127 "name": "pt1", 00:09:15.127 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.127 "is_configured": true, 00:09:15.127 "data_offset": 2048, 00:09:15.127 "data_size": 63488 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "name": "pt2", 00:09:15.127 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.127 "is_configured": true, 00:09:15.127 "data_offset": 2048, 00:09:15.127 "data_size": 63488 00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "name": "pt3", 00:09:15.127 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:15.127 "is_configured": true, 00:09:15.127 "data_offset": 2048, 00:09:15.127 "data_size": 63488 
00:09:15.127 }, 00:09:15.127 { 00:09:15.127 "name": "pt4", 00:09:15.127 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:15.127 "is_configured": true, 00:09:15.127 "data_offset": 2048, 00:09:15.127 "data_size": 63488 00:09:15.127 } 00:09:15.127 ] 00:09:15.127 } 00:09:15.127 } 00:09:15.127 }' 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:15.127 pt2 00:09:15.127 pt3 00:09:15.127 pt4' 00:09:15.127 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.387 
21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.387 [2024-11-27 21:41:38.477728] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.387 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' fda6f48b-52d8-4381-a3ec-5657af96e9ba '!=' fda6f48b-52d8-4381-a3ec-5657af96e9ba ']' 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81377 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81377 ']' 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81377 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81377 00:09:15.647 killing process with pid 81377 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81377' 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 81377 00:09:15.647 [2024-11-27 21:41:38.558133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.647 [2024-11-27 21:41:38.558214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.647 [2024-11-27 21:41:38.558282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.647 [2024-11-27 21:41:38.558295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:15.647 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 81377 00:09:15.647 [2024-11-27 21:41:38.601657] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:15.907 21:41:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:15.907 ************************************ 00:09:15.907 END TEST raid_superblock_test 00:09:15.907 ************************************ 00:09:15.907 00:09:15.907 real 0m4.167s 00:09:15.907 user 0m6.648s 00:09:15.907 sys 0m0.883s 00:09:15.907 21:41:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.907 21:41:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.907 21:41:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:09:15.907 21:41:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:15.907 21:41:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.907 21:41:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:15.907 ************************************ 00:09:15.907 START TEST raid_read_error_test 00:09:15.907 ************************************ 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:15.907 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.umEBT8YtoA 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81625 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81625 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 81625 ']' 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.908 21:41:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.908 [2024-11-27 21:41:38.985618] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:15.908 [2024-11-27 21:41:38.985730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81625 ] 00:09:16.168 [2024-11-27 21:41:39.142226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.168 [2024-11-27 21:41:39.166884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.168 [2024-11-27 21:41:39.208651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.168 [2024-11-27 21:41:39.208688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.739 BaseBdev1_malloc 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.739 true 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.739 [2024-11-27 21:41:39.844300] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:16.739 [2024-11-27 21:41:39.844389] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:16.739 [2024-11-27 21:41:39.844427] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:16.739 [2024-11-27 21:41:39.844455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:16.739 [2024-11-27 21:41:39.846642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:16.739 [2024-11-27 21:41:39.846711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:16.739 BaseBdev1 00:09:16.739 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.740 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:16.740 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:16.740 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.740 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 BaseBdev2_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 true 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 [2024-11-27 21:41:39.884935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.000 [2024-11-27 21:41:39.885017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.000 [2024-11-27 21:41:39.885038] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:17.000 [2024-11-27 21:41:39.885055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.000 [2024-11-27 21:41:39.887116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.000 [2024-11-27 21:41:39.887152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.000 BaseBdev2 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 BaseBdev3_malloc 00:09:17.000 21:41:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 true 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 [2024-11-27 21:41:39.925369] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:17.000 [2024-11-27 21:41:39.925412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.000 [2024-11-27 21:41:39.925431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:17.000 [2024-11-27 21:41:39.925439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.000 [2024-11-27 21:41:39.927480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.000 [2024-11-27 21:41:39.927515] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:17.000 BaseBdev3 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 BaseBdev4_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.000 true 00:09:17.000 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 [2024-11-27 21:41:39.974648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:17.001 [2024-11-27 21:41:39.974692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.001 [2024-11-27 21:41:39.974728] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:17.001 [2024-11-27 21:41:39.974736] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.001 [2024-11-27 21:41:39.976766] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.001 [2024-11-27 21:41:39.976810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:17.001 BaseBdev4 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 [2024-11-27 21:41:39.986663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.001 [2024-11-27 21:41:39.988459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.001 [2024-11-27 21:41:39.988535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.001 [2024-11-27 21:41:39.988583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:17.001 [2024-11-27 21:41:39.988768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:17.001 [2024-11-27 21:41:39.988779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:17.001 [2024-11-27 21:41:39.989036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:09:17.001 [2024-11-27 21:41:39.989163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:17.001 [2024-11-27 21:41:39.989176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:17.001 [2024-11-27 21:41:39.989296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:17.001 21:41:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.001 21:41:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.001 21:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.001 21:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.001 21:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.001 "name": "raid_bdev1", 00:09:17.001 "uuid": "9d812633-fad9-43c2-99cd-57aeea3ed7df", 00:09:17.001 "strip_size_kb": 64, 00:09:17.001 "state": "online", 00:09:17.001 "raid_level": "raid0", 00:09:17.001 "superblock": true, 00:09:17.001 "num_base_bdevs": 4, 00:09:17.001 "num_base_bdevs_discovered": 4, 00:09:17.001 "num_base_bdevs_operational": 4, 00:09:17.001 "base_bdevs_list": [ 00:09:17.001 
{ 00:09:17.001 "name": "BaseBdev1", 00:09:17.001 "uuid": "28a4ddd5-1c22-520d-8952-d6385cfb86cd", 00:09:17.001 "is_configured": true, 00:09:17.001 "data_offset": 2048, 00:09:17.001 "data_size": 63488 00:09:17.001 }, 00:09:17.001 { 00:09:17.001 "name": "BaseBdev2", 00:09:17.001 "uuid": "4373ba8f-66f4-580e-bb5a-e776267257ce", 00:09:17.001 "is_configured": true, 00:09:17.001 "data_offset": 2048, 00:09:17.001 "data_size": 63488 00:09:17.001 }, 00:09:17.001 { 00:09:17.001 "name": "BaseBdev3", 00:09:17.001 "uuid": "04be8938-b396-5a7d-b71c-2d95e1c206e4", 00:09:17.001 "is_configured": true, 00:09:17.001 "data_offset": 2048, 00:09:17.001 "data_size": 63488 00:09:17.001 }, 00:09:17.001 { 00:09:17.001 "name": "BaseBdev4", 00:09:17.001 "uuid": "beded0b1-ca6b-5efb-8549-4d0d018dafd4", 00:09:17.001 "is_configured": true, 00:09:17.001 "data_offset": 2048, 00:09:17.001 "data_size": 63488 00:09:17.001 } 00:09:17.001 ] 00:09:17.001 }' 00:09:17.001 21:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.001 21:41:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.569 21:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:17.569 21:41:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:17.569 [2024-11-27 21:41:40.550083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.506 21:41:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.506 21:41:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.506 "name": "raid_bdev1", 00:09:18.506 "uuid": "9d812633-fad9-43c2-99cd-57aeea3ed7df", 00:09:18.506 "strip_size_kb": 64, 00:09:18.506 "state": "online", 00:09:18.506 "raid_level": "raid0", 00:09:18.506 "superblock": true, 00:09:18.506 "num_base_bdevs": 4, 00:09:18.506 "num_base_bdevs_discovered": 4, 00:09:18.506 "num_base_bdevs_operational": 4, 00:09:18.506 "base_bdevs_list": [ 00:09:18.506 { 00:09:18.506 "name": "BaseBdev1", 00:09:18.506 "uuid": "28a4ddd5-1c22-520d-8952-d6385cfb86cd", 00:09:18.506 "is_configured": true, 00:09:18.506 "data_offset": 2048, 00:09:18.506 "data_size": 63488 00:09:18.506 }, 00:09:18.506 { 00:09:18.506 "name": "BaseBdev2", 00:09:18.506 "uuid": "4373ba8f-66f4-580e-bb5a-e776267257ce", 00:09:18.506 "is_configured": true, 00:09:18.506 "data_offset": 2048, 00:09:18.506 "data_size": 63488 00:09:18.506 }, 00:09:18.506 { 00:09:18.506 "name": "BaseBdev3", 00:09:18.506 "uuid": "04be8938-b396-5a7d-b71c-2d95e1c206e4", 00:09:18.506 "is_configured": true, 00:09:18.506 "data_offset": 2048, 00:09:18.506 "data_size": 63488 00:09:18.506 }, 00:09:18.506 { 00:09:18.506 "name": "BaseBdev4", 00:09:18.506 "uuid": "beded0b1-ca6b-5efb-8549-4d0d018dafd4", 00:09:18.506 "is_configured": true, 00:09:18.506 "data_offset": 2048, 00:09:18.506 "data_size": 63488 00:09:18.506 } 00:09:18.506 ] 00:09:18.506 }' 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.506 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.775 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:18.775 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.775 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.056 [2024-11-27 21:41:41.893842] 
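The `verify_raid_bdev_state` helper above captures the `bdev_raid_get_bdevs all` output through `jq -r '.[] | select(.name == "raid_bdev1")'` and then checks fields such as `state` and `num_base_bdevs_discovered`. A minimal standalone sketch of that field extraction, using `sed` instead of `jq` so it runs anywhere (the sample JSON is a trimmed-down copy of the shape shown in the log, not real RPC output):

```shell
# Trimmed sample of the bdev_raid_get_bdevs JSON shape seen in the log above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid0",
  "num_base_bdevs_discovered": 4,
  "num_base_bdevs_operational": 4
}'

# Pull single scalar fields; the real script pipes the RPC output through jq,
# this sed variant is only an illustration of what gets checked.
state=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"state": "\([^"]*\)".*/\1/p')
discovered=$(printf '%s\n' "$raid_bdev_info" | sed -n 's/.*"num_base_bdevs_discovered": \([0-9]*\).*/\1/p')

echo "$state $discovered"
```

The test passes when the extracted state matches the expected `online` and the discovered count equals the operational count, exactly as the xtrace lines above exercise.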
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.056 [2024-11-27 21:41:41.893927] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.056 [2024-11-27 21:41:41.896928] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.056 [2024-11-27 21:41:41.897026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.056 [2024-11-27 21:41:41.897131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.056 [2024-11-27 21:41:41.897182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:19.056 { 00:09:19.056 "results": [ 00:09:19.056 { 00:09:19.056 "job": "raid_bdev1", 00:09:19.056 "core_mask": "0x1", 00:09:19.056 "workload": "randrw", 00:09:19.056 "percentage": 50, 00:09:19.056 "status": "finished", 00:09:19.056 "queue_depth": 1, 00:09:19.056 "io_size": 131072, 00:09:19.056 "runtime": 1.344714, 00:09:19.056 "iops": 16317.224331716632, 00:09:19.056 "mibps": 2039.653041464579, 00:09:19.056 "io_failed": 1, 00:09:19.056 "io_timeout": 0, 00:09:19.056 "avg_latency_us": 84.64080994287104, 00:09:19.056 "min_latency_us": 25.152838427947597, 00:09:19.056 "max_latency_us": 1366.5257641921398 00:09:19.056 } 00:09:19.056 ], 00:09:19.056 "core_count": 1 00:09:19.056 } 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81625 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 81625 ']' 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 81625 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81625 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81625' 00:09:19.056 killing process with pid 81625 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 81625 00:09:19.056 [2024-11-27 21:41:41.943977] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.056 21:41:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 81625 00:09:19.056 [2024-11-27 21:41:41.979196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.umEBT8YtoA 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:19.315 ************************************ 00:09:19.315 END TEST raid_read_error_test 00:09:19.315 ************************************ 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:19.315 00:09:19.315 real 0m3.305s 
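The `grep -v Job | grep raid_bdev1 | awk '{print $6}'` pipeline above pulls the failed-IO-per-second figure out of the bdevperf log and the test asserts it is non-zero (`[[ 0.74 != \0\.\0\0 ]]`), confirming the injected read error was actually hit. A sketch of that extraction against a hypothetical summary line (the field layout here is invented for illustration; only the pipeline itself comes from the log):

```shell
# Hypothetical bdevperf summary line; constructed so that awk field 6 holds
# the fails-per-second value, mirroring the pipeline used in the test above.
sample='raid_bdev1 : 16317.22 IOPS 2039.65 0.74'

fail_per_s=$(printf '%s\n' "$sample" | grep raid_bdev1 | awk '{print $6}')

# Non-zero means at least one injected error was observed during the run.
if [[ "$fail_per_s" != "0.00" ]]; then
  echo "error injection observed: $fail_per_s fails/s"
fi
```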
00:09:19.315 user 0m4.217s 00:09:19.315 sys 0m0.502s 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.315 21:41:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 21:41:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:09:19.315 21:41:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:19.315 21:41:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.315 21:41:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 ************************************ 00:09:19.315 START TEST raid_write_error_test 00:09:19.315 ************************************ 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8phHMXI2Ib 00:09:19.315 21:41:42 
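The `(( i = 1 )) … (( i <= num_base_bdevs )) … echo BaseBdevN` trace above is the array initializer building `base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')` from the bdev count. The same construction as a plain loop (a sketch; the real script builds the array via command substitution around the `echo` calls shown in the trace):

```shell
# Build the base-bdev name list for a 4-way RAID0, as the xtrace above does.
num_base_bdevs=4
base_bdevs=()
i=1
while (( i <= num_base_bdevs )); do
  base_bdevs+=("BaseBdev$i")
  (( i++ ))
done

echo "${base_bdevs[@]}"
```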
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=81754 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 81754 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 81754 ']' 00:09:19.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.315 21:41:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.315 [2024-11-27 21:41:42.366419] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:19.315 [2024-11-27 21:41:42.366545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81754 ] 00:09:19.573 [2024-11-27 21:41:42.519601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.573 [2024-11-27 21:41:42.544171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.574 [2024-11-27 21:41:42.586409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.574 [2024-11-27 21:41:42.586441] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 BaseBdev1_malloc 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 true 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 [2024-11-27 21:41:43.221282] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:20.141 [2024-11-27 21:41:43.221347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.141 [2024-11-27 21:41:43.221368] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:20.141 [2024-11-27 21:41:43.221376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.141 [2024-11-27 21:41:43.223446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.141 [2024-11-27 21:41:43.223549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:20.141 BaseBdev1 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 BaseBdev2_malloc 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:20.141 21:41:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.141 true 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.141 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.400 [2024-11-27 21:41:43.261748] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:20.400 [2024-11-27 21:41:43.261807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.400 [2024-11-27 21:41:43.261826] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:20.400 [2024-11-27 21:41:43.261843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.400 [2024-11-27 21:41:43.264056] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.400 [2024-11-27 21:41:43.264100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:20.400 BaseBdev2 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:20.400 BaseBdev3_malloc 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.400 true 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:20.400 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 [2024-11-27 21:41:43.302082] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:20.401 [2024-11-27 21:41:43.302123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.401 [2024-11-27 21:41:43.302141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:20.401 [2024-11-27 21:41:43.302150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.401 [2024-11-27 21:41:43.304179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.401 [2024-11-27 21:41:43.304265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:20.401 BaseBdev3 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 BaseBdev4_malloc 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 true 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 [2024-11-27 21:41:43.360781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:20.401 [2024-11-27 21:41:43.360857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:20.401 [2024-11-27 21:41:43.360886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:20.401 [2024-11-27 21:41:43.360899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:20.401 [2024-11-27 21:41:43.363713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:20.401 [2024-11-27 21:41:43.363757] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:20.401 BaseBdev4 
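Each base bdev above is assembled as a three-layer stack: a malloc backing bdev (`bdev_malloc_create 32 512 -b BaseBdevN_malloc`), an error-injection wrapper (`bdev_error_create`, which exposes `EE_BaseBdevN_malloc`), and a passthru bdev named `BaseBdevN` on top (`bdev_passthru_create`). A dry-run sketch of that RPC sequence, with `rpc_cmd` stubbed out as `echo` since no SPDK target is running here:

```shell
# Stub: print the RPC instead of issuing it (no live SPDK app in this sketch).
rpc_cmd() { echo "rpc: $*"; }

# The malloc -> error -> passthru stack built for each base bdev in the log.
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
  rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"
  rpc_cmd bdev_error_create "${bdev}_malloc"
  rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
```

Layering the passthru over the error bdev is what lets the test later target `EE_BaseBdev1_malloc` with `bdev_error_inject_error … write failure` while the RAID volume only ever sees `BaseBdev1`.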
00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 [2024-11-27 21:41:43.372776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.401 [2024-11-27 21:41:43.374643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.401 [2024-11-27 21:41:43.374777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.401 [2024-11-27 21:41:43.374864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:20.401 [2024-11-27 21:41:43.375119] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:20.401 [2024-11-27 21:41:43.375167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:20.401 [2024-11-27 21:41:43.375484] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:09:20.401 [2024-11-27 21:41:43.375685] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:20.401 [2024-11-27 21:41:43.375733] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:20.401 [2024-11-27 21:41:43.375927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.401 "name": "raid_bdev1", 00:09:20.401 "uuid": "0ca1032c-42cd-4f6c-aaf9-76e2b79b4a9e", 00:09:20.401 "strip_size_kb": 64, 00:09:20.401 "state": "online", 00:09:20.401 "raid_level": "raid0", 00:09:20.401 "superblock": true, 00:09:20.401 "num_base_bdevs": 4, 00:09:20.401 "num_base_bdevs_discovered": 4, 00:09:20.401 
"num_base_bdevs_operational": 4, 00:09:20.401 "base_bdevs_list": [ 00:09:20.401 { 00:09:20.401 "name": "BaseBdev1", 00:09:20.401 "uuid": "c83dca98-8326-51a9-831b-7d5f5a9c8d6c", 00:09:20.401 "is_configured": true, 00:09:20.401 "data_offset": 2048, 00:09:20.401 "data_size": 63488 00:09:20.401 }, 00:09:20.401 { 00:09:20.401 "name": "BaseBdev2", 00:09:20.401 "uuid": "14c00d97-9cfa-5408-9a43-2c1b457c719e", 00:09:20.401 "is_configured": true, 00:09:20.401 "data_offset": 2048, 00:09:20.401 "data_size": 63488 00:09:20.401 }, 00:09:20.401 { 00:09:20.401 "name": "BaseBdev3", 00:09:20.401 "uuid": "0a7e536c-17af-5833-9832-334697e12352", 00:09:20.401 "is_configured": true, 00:09:20.401 "data_offset": 2048, 00:09:20.401 "data_size": 63488 00:09:20.401 }, 00:09:20.401 { 00:09:20.401 "name": "BaseBdev4", 00:09:20.401 "uuid": "3eb70653-e0c3-5de4-8dea-80a98aae7855", 00:09:20.401 "is_configured": true, 00:09:20.401 "data_offset": 2048, 00:09:20.401 "data_size": 63488 00:09:20.401 } 00:09:20.401 ] 00:09:20.401 }' 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.401 21:41:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.969 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:20.969 21:41:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:20.969 [2024-11-27 21:41:43.876507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.905 "name": "raid_bdev1", 00:09:21.905 "uuid": "0ca1032c-42cd-4f6c-aaf9-76e2b79b4a9e", 00:09:21.905 "strip_size_kb": 64, 00:09:21.905 "state": "online", 00:09:21.905 "raid_level": "raid0", 00:09:21.905 "superblock": true, 00:09:21.905 "num_base_bdevs": 4, 00:09:21.905 "num_base_bdevs_discovered": 4, 00:09:21.905 "num_base_bdevs_operational": 4, 00:09:21.905 "base_bdevs_list": [ 00:09:21.905 { 00:09:21.905 "name": "BaseBdev1", 00:09:21.905 "uuid": "c83dca98-8326-51a9-831b-7d5f5a9c8d6c", 00:09:21.905 "is_configured": true, 00:09:21.905 "data_offset": 2048, 00:09:21.905 "data_size": 63488 00:09:21.905 }, 00:09:21.905 { 00:09:21.905 "name": "BaseBdev2", 00:09:21.905 "uuid": "14c00d97-9cfa-5408-9a43-2c1b457c719e", 00:09:21.905 "is_configured": true, 00:09:21.905 "data_offset": 2048, 00:09:21.905 "data_size": 63488 00:09:21.905 }, 00:09:21.905 { 00:09:21.905 "name": "BaseBdev3", 00:09:21.905 "uuid": "0a7e536c-17af-5833-9832-334697e12352", 00:09:21.905 "is_configured": true, 00:09:21.905 "data_offset": 2048, 00:09:21.905 "data_size": 63488 00:09:21.905 }, 00:09:21.905 { 00:09:21.905 "name": "BaseBdev4", 00:09:21.905 "uuid": "3eb70653-e0c3-5de4-8dea-80a98aae7855", 00:09:21.905 "is_configured": true, 00:09:21.905 "data_offset": 2048, 00:09:21.905 "data_size": 63488 00:09:21.905 } 00:09:21.905 ] 00:09:21.905 }' 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.905 21:41:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:22.165 [2024-11-27 21:41:45.240228] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:22.165 [2024-11-27 21:41:45.240314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.165 [2024-11-27 21:41:45.242906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.165 [2024-11-27 21:41:45.243009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.165 [2024-11-27 21:41:45.243077] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.165 [2024-11-27 21:41:45.243152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:22.165 { 00:09:22.165 "results": [ 00:09:22.165 { 00:09:22.165 "job": "raid_bdev1", 00:09:22.165 "core_mask": "0x1", 00:09:22.165 "workload": "randrw", 00:09:22.165 "percentage": 50, 00:09:22.165 "status": "finished", 00:09:22.165 "queue_depth": 1, 00:09:22.165 "io_size": 131072, 00:09:22.165 "runtime": 1.364612, 00:09:22.165 "iops": 16428.112899490843, 00:09:22.165 "mibps": 2053.5141124363554, 00:09:22.165 "io_failed": 1, 00:09:22.165 "io_timeout": 0, 00:09:22.165 "avg_latency_us": 84.16973391448418, 00:09:22.165 "min_latency_us": 25.041048034934498, 00:09:22.165 "max_latency_us": 1366.5257641921398 00:09:22.165 } 00:09:22.165 ], 00:09:22.165 "core_count": 1 00:09:22.165 } 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 81754 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 81754 ']' 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 81754 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:22.165 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81754 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81754' 00:09:22.425 killing process with pid 81754 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 81754 00:09:22.425 [2024-11-27 21:41:45.291134] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 81754 00:09:22.425 [2024-11-27 21:41:45.325955] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8phHMXI2Ib 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:22.425 00:09:22.425 real 0m3.277s 00:09:22.425 user 0m4.112s 00:09:22.425 sys 0m0.525s 00:09:22.425 
************************************ 00:09:22.425 END TEST raid_write_error_test 00:09:22.425 ************************************ 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.425 21:41:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 21:41:45 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:22.685 21:41:45 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:09:22.685 21:41:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.685 21:41:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.685 21:41:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 ************************************ 00:09:22.685 START TEST raid_state_function_test 00:09:22.685 ************************************ 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.685 21:41:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:22.685 21:41:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81887 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81887' 00:09:22.685 Process raid pid: 81887 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81887 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81887 ']' 00:09:22.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.685 21:41:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.685 [2024-11-27 21:41:45.704920] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:22.685 [2024-11-27 21:41:45.705045] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.944 [2024-11-27 21:41:45.859743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.944 [2024-11-27 21:41:45.884254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.944 [2024-11-27 21:41:45.926386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.944 [2024-11-27 21:41:45.926529] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:23.512 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.512 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:23.512 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:23.512 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.512 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.512 [2024-11-27 21:41:46.533547] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.512 [2024-11-27 21:41:46.533599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.512 [2024-11-27 21:41:46.533609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.512 [2024-11-27 21:41:46.533618] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.512 [2024-11-27 21:41:46.533624] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:23.512 [2024-11-27 21:41:46.533636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.512 [2024-11-27 21:41:46.533641] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:23.512 [2024-11-27 21:41:46.533649] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.513 "name": "Existed_Raid", 00:09:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.513 "strip_size_kb": 64, 00:09:23.513 "state": "configuring", 00:09:23.513 "raid_level": "concat", 00:09:23.513 "superblock": false, 00:09:23.513 "num_base_bdevs": 4, 00:09:23.513 "num_base_bdevs_discovered": 0, 00:09:23.513 "num_base_bdevs_operational": 4, 00:09:23.513 "base_bdevs_list": [ 00:09:23.513 { 00:09:23.513 "name": "BaseBdev1", 00:09:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.513 "is_configured": false, 00:09:23.513 "data_offset": 0, 00:09:23.513 "data_size": 0 00:09:23.513 }, 00:09:23.513 { 00:09:23.513 "name": "BaseBdev2", 00:09:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.513 "is_configured": false, 00:09:23.513 "data_offset": 0, 00:09:23.513 "data_size": 0 00:09:23.513 }, 00:09:23.513 { 00:09:23.513 "name": "BaseBdev3", 00:09:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.513 "is_configured": false, 00:09:23.513 "data_offset": 0, 00:09:23.513 "data_size": 0 00:09:23.513 }, 00:09:23.513 { 00:09:23.513 "name": "BaseBdev4", 00:09:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.513 "is_configured": false, 00:09:23.513 "data_offset": 0, 00:09:23.513 "data_size": 0 00:09:23.513 } 00:09:23.513 ] 00:09:23.513 }' 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.513 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.080 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:24.080 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.080 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.080 [2024-11-27 21:41:46.996713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.080 [2024-11-27 21:41:46.996809] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:24.080 21:41:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.080 21:41:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.080 [2024-11-27 21:41:47.008691] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.080 [2024-11-27 21:41:47.008769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.080 [2024-11-27 21:41:47.008819] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.080 [2024-11-27 21:41:47.008859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.080 [2024-11-27 21:41:47.008891] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.080 [2024-11-27 21:41:47.008940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.080 [2024-11-27 21:41:47.008973] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:24.080 [2024-11-27 21:41:47.009015] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.080 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.080 [2024-11-27 21:41:47.029722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.081 BaseBdev1 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.081 [ 00:09:24.081 { 00:09:24.081 "name": "BaseBdev1", 00:09:24.081 "aliases": [ 00:09:24.081 "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe" 00:09:24.081 ], 00:09:24.081 "product_name": "Malloc disk", 00:09:24.081 "block_size": 512, 00:09:24.081 "num_blocks": 65536, 00:09:24.081 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:24.081 "assigned_rate_limits": { 00:09:24.081 "rw_ios_per_sec": 0, 00:09:24.081 "rw_mbytes_per_sec": 0, 00:09:24.081 "r_mbytes_per_sec": 0, 00:09:24.081 "w_mbytes_per_sec": 0 00:09:24.081 }, 00:09:24.081 "claimed": true, 00:09:24.081 "claim_type": "exclusive_write", 00:09:24.081 "zoned": false, 00:09:24.081 "supported_io_types": { 00:09:24.081 "read": true, 00:09:24.081 "write": true, 00:09:24.081 "unmap": true, 00:09:24.081 "flush": true, 00:09:24.081 "reset": true, 00:09:24.081 "nvme_admin": false, 00:09:24.081 "nvme_io": false, 00:09:24.081 "nvme_io_md": false, 00:09:24.081 "write_zeroes": true, 00:09:24.081 "zcopy": true, 00:09:24.081 "get_zone_info": false, 00:09:24.081 "zone_management": false, 00:09:24.081 "zone_append": false, 00:09:24.081 "compare": false, 00:09:24.081 "compare_and_write": false, 00:09:24.081 "abort": true, 00:09:24.081 "seek_hole": false, 00:09:24.081 "seek_data": false, 00:09:24.081 "copy": true, 00:09:24.081 "nvme_iov_md": false 00:09:24.081 }, 00:09:24.081 "memory_domains": [ 00:09:24.081 { 00:09:24.081 "dma_device_id": "system", 00:09:24.081 "dma_device_type": 1 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.081 "dma_device_type": 2 00:09:24.081 } 00:09:24.081 ], 00:09:24.081 "driver_specific": {} 00:09:24.081 } 00:09:24.081 ] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.081 "name": "Existed_Raid", 
00:09:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.081 "strip_size_kb": 64, 00:09:24.081 "state": "configuring", 00:09:24.081 "raid_level": "concat", 00:09:24.081 "superblock": false, 00:09:24.081 "num_base_bdevs": 4, 00:09:24.081 "num_base_bdevs_discovered": 1, 00:09:24.081 "num_base_bdevs_operational": 4, 00:09:24.081 "base_bdevs_list": [ 00:09:24.081 { 00:09:24.081 "name": "BaseBdev1", 00:09:24.081 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:24.081 "is_configured": true, 00:09:24.081 "data_offset": 0, 00:09:24.081 "data_size": 65536 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "name": "BaseBdev2", 00:09:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.081 "is_configured": false, 00:09:24.081 "data_offset": 0, 00:09:24.081 "data_size": 0 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "name": "BaseBdev3", 00:09:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.081 "is_configured": false, 00:09:24.081 "data_offset": 0, 00:09:24.081 "data_size": 0 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "name": "BaseBdev4", 00:09:24.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.081 "is_configured": false, 00:09:24.081 "data_offset": 0, 00:09:24.081 "data_size": 0 00:09:24.081 } 00:09:24.081 ] 00:09:24.081 }' 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.081 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.651 [2024-11-27 21:41:47.477013] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.651 [2024-11-27 21:41:47.477060] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.651 [2024-11-27 21:41:47.489044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.651 [2024-11-27 21:41:47.490969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.651 [2024-11-27 21:41:47.491015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.651 [2024-11-27 21:41:47.491029] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.651 [2024-11-27 21:41:47.491057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.651 [2024-11-27 21:41:47.491065] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:24.651 [2024-11-27 21:41:47.491073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.651 "name": "Existed_Raid", 00:09:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.651 "strip_size_kb": 64, 00:09:24.651 "state": "configuring", 00:09:24.651 "raid_level": "concat", 00:09:24.651 "superblock": false, 00:09:24.651 "num_base_bdevs": 4, 00:09:24.651 
"num_base_bdevs_discovered": 1, 00:09:24.651 "num_base_bdevs_operational": 4, 00:09:24.651 "base_bdevs_list": [ 00:09:24.651 { 00:09:24.651 "name": "BaseBdev1", 00:09:24.651 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:24.651 "is_configured": true, 00:09:24.651 "data_offset": 0, 00:09:24.651 "data_size": 65536 00:09:24.651 }, 00:09:24.651 { 00:09:24.651 "name": "BaseBdev2", 00:09:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.651 "is_configured": false, 00:09:24.651 "data_offset": 0, 00:09:24.651 "data_size": 0 00:09:24.651 }, 00:09:24.651 { 00:09:24.651 "name": "BaseBdev3", 00:09:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.651 "is_configured": false, 00:09:24.651 "data_offset": 0, 00:09:24.651 "data_size": 0 00:09:24.651 }, 00:09:24.651 { 00:09:24.651 "name": "BaseBdev4", 00:09:24.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.651 "is_configured": false, 00:09:24.651 "data_offset": 0, 00:09:24.651 "data_size": 0 00:09:24.651 } 00:09:24.651 ] 00:09:24.651 }' 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.651 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.911 [2024-11-27 21:41:47.947035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.911 BaseBdev2 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.911 21:41:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.911 [ 00:09:24.911 { 00:09:24.911 "name": "BaseBdev2", 00:09:24.911 "aliases": [ 00:09:24.911 "a4f9e02b-5fda-43f4-a5e8-7552a1faa405" 00:09:24.911 ], 00:09:24.911 "product_name": "Malloc disk", 00:09:24.911 "block_size": 512, 00:09:24.911 "num_blocks": 65536, 00:09:24.911 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:24.911 "assigned_rate_limits": { 00:09:24.911 "rw_ios_per_sec": 0, 00:09:24.911 "rw_mbytes_per_sec": 0, 00:09:24.911 "r_mbytes_per_sec": 0, 00:09:24.911 "w_mbytes_per_sec": 0 00:09:24.911 }, 00:09:24.911 "claimed": true, 00:09:24.911 "claim_type": "exclusive_write", 00:09:24.911 "zoned": false, 00:09:24.911 "supported_io_types": { 
00:09:24.911 "read": true, 00:09:24.911 "write": true, 00:09:24.911 "unmap": true, 00:09:24.911 "flush": true, 00:09:24.911 "reset": true, 00:09:24.911 "nvme_admin": false, 00:09:24.911 "nvme_io": false, 00:09:24.911 "nvme_io_md": false, 00:09:24.911 "write_zeroes": true, 00:09:24.911 "zcopy": true, 00:09:24.911 "get_zone_info": false, 00:09:24.911 "zone_management": false, 00:09:24.911 "zone_append": false, 00:09:24.911 "compare": false, 00:09:24.911 "compare_and_write": false, 00:09:24.911 "abort": true, 00:09:24.911 "seek_hole": false, 00:09:24.911 "seek_data": false, 00:09:24.911 "copy": true, 00:09:24.911 "nvme_iov_md": false 00:09:24.911 }, 00:09:24.911 "memory_domains": [ 00:09:24.911 { 00:09:24.911 "dma_device_id": "system", 00:09:24.911 "dma_device_type": 1 00:09:24.911 }, 00:09:24.911 { 00:09:24.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.911 "dma_device_type": 2 00:09:24.911 } 00:09:24.911 ], 00:09:24.911 "driver_specific": {} 00:09:24.911 } 00:09:24.911 ] 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.911 21:41:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.911 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.911 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.912 "name": "Existed_Raid", 00:09:24.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.912 "strip_size_kb": 64, 00:09:24.912 "state": "configuring", 00:09:24.912 "raid_level": "concat", 00:09:24.912 "superblock": false, 00:09:24.912 "num_base_bdevs": 4, 00:09:24.912 "num_base_bdevs_discovered": 2, 00:09:24.912 "num_base_bdevs_operational": 4, 00:09:24.912 "base_bdevs_list": [ 00:09:24.912 { 00:09:24.912 "name": "BaseBdev1", 00:09:24.912 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:24.912 "is_configured": true, 00:09:24.912 "data_offset": 0, 00:09:24.912 "data_size": 65536 00:09:24.912 }, 00:09:24.912 { 00:09:24.912 "name": "BaseBdev2", 00:09:24.912 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:24.912 
"is_configured": true, 00:09:24.912 "data_offset": 0, 00:09:24.912 "data_size": 65536 00:09:24.912 }, 00:09:24.912 { 00:09:24.912 "name": "BaseBdev3", 00:09:24.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.912 "is_configured": false, 00:09:24.912 "data_offset": 0, 00:09:24.912 "data_size": 0 00:09:24.912 }, 00:09:24.912 { 00:09:24.912 "name": "BaseBdev4", 00:09:24.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.912 "is_configured": false, 00:09:24.912 "data_offset": 0, 00:09:24.912 "data_size": 0 00:09:24.912 } 00:09:24.912 ] 00:09:24.912 }' 00:09:24.912 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.912 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.481 [2024-11-27 21:41:48.420969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.481 BaseBdev3 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.481 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.481 [ 00:09:25.481 { 00:09:25.481 "name": "BaseBdev3", 00:09:25.481 "aliases": [ 00:09:25.481 "cceff440-af4f-48cf-bcf4-1466c50b1db8" 00:09:25.481 ], 00:09:25.481 "product_name": "Malloc disk", 00:09:25.481 "block_size": 512, 00:09:25.481 "num_blocks": 65536, 00:09:25.481 "uuid": "cceff440-af4f-48cf-bcf4-1466c50b1db8", 00:09:25.481 "assigned_rate_limits": { 00:09:25.481 "rw_ios_per_sec": 0, 00:09:25.481 "rw_mbytes_per_sec": 0, 00:09:25.482 "r_mbytes_per_sec": 0, 00:09:25.482 "w_mbytes_per_sec": 0 00:09:25.482 }, 00:09:25.482 "claimed": true, 00:09:25.482 "claim_type": "exclusive_write", 00:09:25.482 "zoned": false, 00:09:25.482 "supported_io_types": { 00:09:25.482 "read": true, 00:09:25.482 "write": true, 00:09:25.482 "unmap": true, 00:09:25.482 "flush": true, 00:09:25.482 "reset": true, 00:09:25.482 "nvme_admin": false, 00:09:25.482 "nvme_io": false, 00:09:25.482 "nvme_io_md": false, 00:09:25.482 "write_zeroes": true, 00:09:25.482 "zcopy": true, 00:09:25.482 "get_zone_info": false, 00:09:25.482 "zone_management": false, 00:09:25.482 "zone_append": false, 00:09:25.482 "compare": false, 00:09:25.482 "compare_and_write": false, 
00:09:25.482 "abort": true, 00:09:25.482 "seek_hole": false, 00:09:25.482 "seek_data": false, 00:09:25.482 "copy": true, 00:09:25.482 "nvme_iov_md": false 00:09:25.482 }, 00:09:25.482 "memory_domains": [ 00:09:25.482 { 00:09:25.482 "dma_device_id": "system", 00:09:25.482 "dma_device_type": 1 00:09:25.482 }, 00:09:25.482 { 00:09:25.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.482 "dma_device_type": 2 00:09:25.482 } 00:09:25.482 ], 00:09:25.482 "driver_specific": {} 00:09:25.482 } 00:09:25.482 ] 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.482 "name": "Existed_Raid", 00:09:25.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.482 "strip_size_kb": 64, 00:09:25.482 "state": "configuring", 00:09:25.482 "raid_level": "concat", 00:09:25.482 "superblock": false, 00:09:25.482 "num_base_bdevs": 4, 00:09:25.482 "num_base_bdevs_discovered": 3, 00:09:25.482 "num_base_bdevs_operational": 4, 00:09:25.482 "base_bdevs_list": [ 00:09:25.482 { 00:09:25.482 "name": "BaseBdev1", 00:09:25.482 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:25.482 "is_configured": true, 00:09:25.482 "data_offset": 0, 00:09:25.482 "data_size": 65536 00:09:25.482 }, 00:09:25.482 { 00:09:25.482 "name": "BaseBdev2", 00:09:25.482 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:25.482 "is_configured": true, 00:09:25.482 "data_offset": 0, 00:09:25.482 "data_size": 65536 00:09:25.482 }, 00:09:25.482 { 00:09:25.482 "name": "BaseBdev3", 00:09:25.482 "uuid": "cceff440-af4f-48cf-bcf4-1466c50b1db8", 00:09:25.482 "is_configured": true, 00:09:25.482 "data_offset": 0, 00:09:25.482 "data_size": 65536 00:09:25.482 }, 00:09:25.482 { 00:09:25.482 "name": "BaseBdev4", 00:09:25.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.482 "is_configured": false, 
00:09:25.482 "data_offset": 0, 00:09:25.482 "data_size": 0 00:09:25.482 } 00:09:25.482 ] 00:09:25.482 }' 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.482 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.057 [2024-11-27 21:41:48.898924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:26.057 [2024-11-27 21:41:48.899034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:26.057 [2024-11-27 21:41:48.899048] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:26.057 [2024-11-27 21:41:48.899392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:26.057 [2024-11-27 21:41:48.899524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:26.057 [2024-11-27 21:41:48.899542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:26.057 [2024-11-27 21:41:48.899739] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.057 BaseBdev4 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.057 [ 00:09:26.057 { 00:09:26.057 "name": "BaseBdev4", 00:09:26.057 "aliases": [ 00:09:26.057 "fd49c10a-0df3-41c6-8bee-65e4fdfe6b95" 00:09:26.057 ], 00:09:26.057 "product_name": "Malloc disk", 00:09:26.057 "block_size": 512, 00:09:26.057 "num_blocks": 65536, 00:09:26.057 "uuid": "fd49c10a-0df3-41c6-8bee-65e4fdfe6b95", 00:09:26.057 "assigned_rate_limits": { 00:09:26.057 "rw_ios_per_sec": 0, 00:09:26.057 "rw_mbytes_per_sec": 0, 00:09:26.057 "r_mbytes_per_sec": 0, 00:09:26.057 "w_mbytes_per_sec": 0 00:09:26.057 }, 00:09:26.057 "claimed": true, 00:09:26.057 "claim_type": "exclusive_write", 00:09:26.057 "zoned": false, 00:09:26.057 "supported_io_types": { 00:09:26.057 "read": true, 00:09:26.057 "write": true, 00:09:26.057 "unmap": true, 00:09:26.057 "flush": true, 00:09:26.057 "reset": true, 00:09:26.057 
"nvme_admin": false, 00:09:26.057 "nvme_io": false, 00:09:26.057 "nvme_io_md": false, 00:09:26.057 "write_zeroes": true, 00:09:26.057 "zcopy": true, 00:09:26.057 "get_zone_info": false, 00:09:26.057 "zone_management": false, 00:09:26.057 "zone_append": false, 00:09:26.057 "compare": false, 00:09:26.057 "compare_and_write": false, 00:09:26.057 "abort": true, 00:09:26.057 "seek_hole": false, 00:09:26.057 "seek_data": false, 00:09:26.057 "copy": true, 00:09:26.057 "nvme_iov_md": false 00:09:26.057 }, 00:09:26.057 "memory_domains": [ 00:09:26.057 { 00:09:26.057 "dma_device_id": "system", 00:09:26.057 "dma_device_type": 1 00:09:26.057 }, 00:09:26.057 { 00:09:26.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.057 "dma_device_type": 2 00:09:26.057 } 00:09:26.057 ], 00:09:26.057 "driver_specific": {} 00:09:26.057 } 00:09:26.057 ] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:26.057 
21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.057 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.057 "name": "Existed_Raid", 00:09:26.057 "uuid": "706c6040-749d-4f3e-a683-07adbcf48504", 00:09:26.057 "strip_size_kb": 64, 00:09:26.057 "state": "online", 00:09:26.057 "raid_level": "concat", 00:09:26.057 "superblock": false, 00:09:26.057 "num_base_bdevs": 4, 00:09:26.057 "num_base_bdevs_discovered": 4, 00:09:26.057 "num_base_bdevs_operational": 4, 00:09:26.057 "base_bdevs_list": [ 00:09:26.057 { 00:09:26.058 "name": "BaseBdev1", 00:09:26.058 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:26.058 "is_configured": true, 00:09:26.058 "data_offset": 0, 00:09:26.058 "data_size": 65536 00:09:26.058 }, 00:09:26.058 { 00:09:26.058 "name": "BaseBdev2", 00:09:26.058 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:26.058 "is_configured": true, 00:09:26.058 "data_offset": 0, 00:09:26.058 "data_size": 65536 00:09:26.058 }, 00:09:26.058 { 00:09:26.058 "name": "BaseBdev3", 
00:09:26.058 "uuid": "cceff440-af4f-48cf-bcf4-1466c50b1db8", 00:09:26.058 "is_configured": true, 00:09:26.058 "data_offset": 0, 00:09:26.058 "data_size": 65536 00:09:26.058 }, 00:09:26.058 { 00:09:26.058 "name": "BaseBdev4", 00:09:26.058 "uuid": "fd49c10a-0df3-41c6-8bee-65e4fdfe6b95", 00:09:26.058 "is_configured": true, 00:09:26.058 "data_offset": 0, 00:09:26.058 "data_size": 65536 00:09:26.058 } 00:09:26.058 ] 00:09:26.058 }' 00:09:26.058 21:41:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.058 21:41:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.327 [2024-11-27 21:41:49.362510] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.327 
21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.327 "name": "Existed_Raid", 00:09:26.327 "aliases": [ 00:09:26.327 "706c6040-749d-4f3e-a683-07adbcf48504" 00:09:26.327 ], 00:09:26.327 "product_name": "Raid Volume", 00:09:26.327 "block_size": 512, 00:09:26.327 "num_blocks": 262144, 00:09:26.327 "uuid": "706c6040-749d-4f3e-a683-07adbcf48504", 00:09:26.327 "assigned_rate_limits": { 00:09:26.327 "rw_ios_per_sec": 0, 00:09:26.327 "rw_mbytes_per_sec": 0, 00:09:26.327 "r_mbytes_per_sec": 0, 00:09:26.327 "w_mbytes_per_sec": 0 00:09:26.327 }, 00:09:26.327 "claimed": false, 00:09:26.327 "zoned": false, 00:09:26.327 "supported_io_types": { 00:09:26.327 "read": true, 00:09:26.327 "write": true, 00:09:26.327 "unmap": true, 00:09:26.327 "flush": true, 00:09:26.327 "reset": true, 00:09:26.327 "nvme_admin": false, 00:09:26.327 "nvme_io": false, 00:09:26.327 "nvme_io_md": false, 00:09:26.327 "write_zeroes": true, 00:09:26.327 "zcopy": false, 00:09:26.327 "get_zone_info": false, 00:09:26.327 "zone_management": false, 00:09:26.327 "zone_append": false, 00:09:26.327 "compare": false, 00:09:26.327 "compare_and_write": false, 00:09:26.327 "abort": false, 00:09:26.327 "seek_hole": false, 00:09:26.327 "seek_data": false, 00:09:26.327 "copy": false, 00:09:26.327 "nvme_iov_md": false 00:09:26.327 }, 00:09:26.327 "memory_domains": [ 00:09:26.327 { 00:09:26.327 "dma_device_id": "system", 00:09:26.327 "dma_device_type": 1 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.327 "dma_device_type": 2 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "system", 00:09:26.327 "dma_device_type": 1 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.327 "dma_device_type": 2 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "system", 00:09:26.327 "dma_device_type": 1 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:26.327 "dma_device_type": 2 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "system", 00:09:26.327 "dma_device_type": 1 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.327 "dma_device_type": 2 00:09:26.327 } 00:09:26.327 ], 00:09:26.327 "driver_specific": { 00:09:26.327 "raid": { 00:09:26.327 "uuid": "706c6040-749d-4f3e-a683-07adbcf48504", 00:09:26.327 "strip_size_kb": 64, 00:09:26.327 "state": "online", 00:09:26.327 "raid_level": "concat", 00:09:26.327 "superblock": false, 00:09:26.327 "num_base_bdevs": 4, 00:09:26.327 "num_base_bdevs_discovered": 4, 00:09:26.327 "num_base_bdevs_operational": 4, 00:09:26.327 "base_bdevs_list": [ 00:09:26.327 { 00:09:26.327 "name": "BaseBdev1", 00:09:26.327 "uuid": "8cbde5a3-0b5d-49e1-897c-42017f2c5dbe", 00:09:26.327 "is_configured": true, 00:09:26.327 "data_offset": 0, 00:09:26.327 "data_size": 65536 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "name": "BaseBdev2", 00:09:26.327 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:26.327 "is_configured": true, 00:09:26.327 "data_offset": 0, 00:09:26.327 "data_size": 65536 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "name": "BaseBdev3", 00:09:26.327 "uuid": "cceff440-af4f-48cf-bcf4-1466c50b1db8", 00:09:26.327 "is_configured": true, 00:09:26.327 "data_offset": 0, 00:09:26.327 "data_size": 65536 00:09:26.327 }, 00:09:26.327 { 00:09:26.327 "name": "BaseBdev4", 00:09:26.327 "uuid": "fd49c10a-0df3-41c6-8bee-65e4fdfe6b95", 00:09:26.327 "is_configured": true, 00:09:26.327 "data_offset": 0, 00:09:26.327 "data_size": 65536 00:09:26.327 } 00:09:26.327 ] 00:09:26.327 } 00:09:26.327 } 00:09:26.327 }' 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.327 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:26.327 BaseBdev2 
00:09:26.327 BaseBdev3 00:09:26.328 BaseBdev4' 00:09:26.328 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.588 21:41:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.588 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.588 21:41:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.589 [2024-11-27 21:41:49.681675] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.589 [2024-11-27 21:41:49.681746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.589 [2024-11-27 21:41:49.681829] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.589 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.848 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.848 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.848 "name": "Existed_Raid", 00:09:26.848 "uuid": "706c6040-749d-4f3e-a683-07adbcf48504", 00:09:26.848 "strip_size_kb": 64, 00:09:26.848 "state": "offline", 00:09:26.848 "raid_level": "concat", 00:09:26.848 "superblock": false, 00:09:26.848 "num_base_bdevs": 4, 00:09:26.848 "num_base_bdevs_discovered": 3, 00:09:26.848 "num_base_bdevs_operational": 3, 00:09:26.848 "base_bdevs_list": [ 00:09:26.848 { 00:09:26.848 "name": null, 00:09:26.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.848 "is_configured": false, 00:09:26.848 "data_offset": 0, 00:09:26.848 "data_size": 65536 00:09:26.848 }, 00:09:26.848 { 00:09:26.848 "name": "BaseBdev2", 00:09:26.848 "uuid": "a4f9e02b-5fda-43f4-a5e8-7552a1faa405", 00:09:26.848 "is_configured": 
true, 00:09:26.848 "data_offset": 0, 00:09:26.848 "data_size": 65536 00:09:26.848 }, 00:09:26.848 { 00:09:26.848 "name": "BaseBdev3", 00:09:26.848 "uuid": "cceff440-af4f-48cf-bcf4-1466c50b1db8", 00:09:26.848 "is_configured": true, 00:09:26.848 "data_offset": 0, 00:09:26.848 "data_size": 65536 00:09:26.848 }, 00:09:26.848 { 00:09:26.848 "name": "BaseBdev4", 00:09:26.848 "uuid": "fd49c10a-0df3-41c6-8bee-65e4fdfe6b95", 00:09:26.848 "is_configured": true, 00:09:26.848 "data_offset": 0, 00:09:26.849 "data_size": 65536 00:09:26.849 } 00:09:26.849 ] 00:09:26.849 }' 00:09:26.849 21:41:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.849 21:41:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:27.109 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.109 [2024-11-27 21:41:50.219998] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 [2024-11-27 21:41:50.291257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.369 21:41:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 [2024-11-27 21:41:50.362003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:27.369 [2024-11-27 21:41:50.362097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 BaseBdev2 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.369 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.369 [ 00:09:27.369 { 00:09:27.369 "name": "BaseBdev2", 00:09:27.369 "aliases": [ 00:09:27.369 "199141a2-b12e-4041-ba97-d098997e9be1" 00:09:27.369 ], 00:09:27.369 "product_name": "Malloc disk", 00:09:27.369 "block_size": 512, 00:09:27.369 "num_blocks": 65536, 00:09:27.369 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:27.370 "assigned_rate_limits": { 00:09:27.370 "rw_ios_per_sec": 0, 00:09:27.370 "rw_mbytes_per_sec": 0, 00:09:27.370 "r_mbytes_per_sec": 0, 00:09:27.370 "w_mbytes_per_sec": 0 00:09:27.370 }, 00:09:27.370 "claimed": false, 00:09:27.370 "zoned": false, 00:09:27.370 "supported_io_types": { 00:09:27.370 "read": true, 00:09:27.370 "write": true, 00:09:27.370 "unmap": true, 00:09:27.370 "flush": true, 00:09:27.370 "reset": true, 00:09:27.370 "nvme_admin": false, 00:09:27.370 "nvme_io": false, 00:09:27.370 "nvme_io_md": false, 00:09:27.370 "write_zeroes": true, 00:09:27.370 "zcopy": true, 00:09:27.370 "get_zone_info": false, 00:09:27.370 "zone_management": false, 00:09:27.370 "zone_append": false, 00:09:27.370 "compare": false, 00:09:27.370 "compare_and_write": false, 00:09:27.370 "abort": true, 00:09:27.370 "seek_hole": false, 00:09:27.370 "seek_data": false, 
00:09:27.370 "copy": true, 00:09:27.370 "nvme_iov_md": false 00:09:27.370 }, 00:09:27.370 "memory_domains": [ 00:09:27.370 { 00:09:27.370 "dma_device_id": "system", 00:09:27.370 "dma_device_type": 1 00:09:27.370 }, 00:09:27.370 { 00:09:27.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.370 "dma_device_type": 2 00:09:27.370 } 00:09:27.370 ], 00:09:27.370 "driver_specific": {} 00:09:27.370 } 00:09:27.370 ] 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.370 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 BaseBdev3 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.630 
21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 [ 00:09:27.630 { 00:09:27.630 "name": "BaseBdev3", 00:09:27.630 "aliases": [ 00:09:27.630 "9903ea01-3000-4769-b1a9-a5ecea0eb5cd" 00:09:27.630 ], 00:09:27.630 "product_name": "Malloc disk", 00:09:27.630 "block_size": 512, 00:09:27.630 "num_blocks": 65536, 00:09:27.630 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:27.630 "assigned_rate_limits": { 00:09:27.630 "rw_ios_per_sec": 0, 00:09:27.630 "rw_mbytes_per_sec": 0, 00:09:27.630 "r_mbytes_per_sec": 0, 00:09:27.630 "w_mbytes_per_sec": 0 00:09:27.630 }, 00:09:27.630 "claimed": false, 00:09:27.630 "zoned": false, 00:09:27.630 "supported_io_types": { 00:09:27.630 "read": true, 00:09:27.630 "write": true, 00:09:27.630 "unmap": true, 00:09:27.630 "flush": true, 00:09:27.630 "reset": true, 00:09:27.630 "nvme_admin": false, 00:09:27.630 "nvme_io": false, 00:09:27.630 "nvme_io_md": false, 00:09:27.630 "write_zeroes": true, 00:09:27.630 "zcopy": true, 00:09:27.630 "get_zone_info": false, 00:09:27.630 "zone_management": false, 00:09:27.630 "zone_append": false, 00:09:27.630 "compare": false, 00:09:27.630 "compare_and_write": false, 00:09:27.630 "abort": true, 00:09:27.630 "seek_hole": false, 00:09:27.630 "seek_data": false, 00:09:27.630 
"copy": true, 00:09:27.630 "nvme_iov_md": false 00:09:27.630 }, 00:09:27.630 "memory_domains": [ 00:09:27.630 { 00:09:27.630 "dma_device_id": "system", 00:09:27.630 "dma_device_type": 1 00:09:27.630 }, 00:09:27.630 { 00:09:27.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.630 "dma_device_type": 2 00:09:27.630 } 00:09:27.630 ], 00:09:27.630 "driver_specific": {} 00:09:27.630 } 00:09:27.630 ] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 BaseBdev4 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.630 21:41:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 [ 00:09:27.630 { 00:09:27.630 "name": "BaseBdev4", 00:09:27.630 "aliases": [ 00:09:27.630 "fd9c802b-1166-4244-aa64-33e047d65c2d" 00:09:27.630 ], 00:09:27.630 "product_name": "Malloc disk", 00:09:27.630 "block_size": 512, 00:09:27.630 "num_blocks": 65536, 00:09:27.630 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:27.630 "assigned_rate_limits": { 00:09:27.630 "rw_ios_per_sec": 0, 00:09:27.630 "rw_mbytes_per_sec": 0, 00:09:27.630 "r_mbytes_per_sec": 0, 00:09:27.630 "w_mbytes_per_sec": 0 00:09:27.630 }, 00:09:27.630 "claimed": false, 00:09:27.630 "zoned": false, 00:09:27.630 "supported_io_types": { 00:09:27.630 "read": true, 00:09:27.630 "write": true, 00:09:27.630 "unmap": true, 00:09:27.630 "flush": true, 00:09:27.630 "reset": true, 00:09:27.630 "nvme_admin": false, 00:09:27.630 "nvme_io": false, 00:09:27.630 "nvme_io_md": false, 00:09:27.630 "write_zeroes": true, 00:09:27.630 "zcopy": true, 00:09:27.630 "get_zone_info": false, 00:09:27.630 "zone_management": false, 00:09:27.630 "zone_append": false, 00:09:27.630 "compare": false, 00:09:27.630 "compare_and_write": false, 00:09:27.630 "abort": true, 00:09:27.630 "seek_hole": false, 00:09:27.630 "seek_data": false, 00:09:27.630 "copy": true, 
00:09:27.630 "nvme_iov_md": false 00:09:27.630 }, 00:09:27.630 "memory_domains": [ 00:09:27.630 { 00:09:27.630 "dma_device_id": "system", 00:09:27.630 "dma_device_type": 1 00:09:27.630 }, 00:09:27.630 { 00:09:27.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.630 "dma_device_type": 2 00:09:27.630 } 00:09:27.630 ], 00:09:27.630 "driver_specific": {} 00:09:27.630 } 00:09:27.630 ] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.630 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.630 [2024-11-27 21:41:50.589461] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.630 [2024-11-27 21:41:50.589556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.631 [2024-11-27 21:41:50.589618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:27.631 [2024-11-27 21:41:50.591467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.631 [2024-11-27 21:41:50.591554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.631 21:41:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.631 "name": "Existed_Raid", 00:09:27.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.631 "strip_size_kb": 64, 00:09:27.631 "state": "configuring", 00:09:27.631 
"raid_level": "concat", 00:09:27.631 "superblock": false, 00:09:27.631 "num_base_bdevs": 4, 00:09:27.631 "num_base_bdevs_discovered": 3, 00:09:27.631 "num_base_bdevs_operational": 4, 00:09:27.631 "base_bdevs_list": [ 00:09:27.631 { 00:09:27.631 "name": "BaseBdev1", 00:09:27.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.631 "is_configured": false, 00:09:27.631 "data_offset": 0, 00:09:27.631 "data_size": 0 00:09:27.631 }, 00:09:27.631 { 00:09:27.631 "name": "BaseBdev2", 00:09:27.631 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:27.631 "is_configured": true, 00:09:27.631 "data_offset": 0, 00:09:27.631 "data_size": 65536 00:09:27.631 }, 00:09:27.631 { 00:09:27.631 "name": "BaseBdev3", 00:09:27.631 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:27.631 "is_configured": true, 00:09:27.631 "data_offset": 0, 00:09:27.631 "data_size": 65536 00:09:27.631 }, 00:09:27.631 { 00:09:27.631 "name": "BaseBdev4", 00:09:27.631 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:27.631 "is_configured": true, 00:09:27.631 "data_offset": 0, 00:09:27.631 "data_size": 65536 00:09:27.631 } 00:09:27.631 ] 00:09:27.631 }' 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.631 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.890 [2024-11-27 21:41:50.960830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.890 21:41:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.150 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.150 "name": "Existed_Raid", 00:09:28.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.150 "strip_size_kb": 64, 00:09:28.150 "state": "configuring", 00:09:28.150 "raid_level": "concat", 00:09:28.150 "superblock": false, 
00:09:28.150 "num_base_bdevs": 4, 00:09:28.150 "num_base_bdevs_discovered": 2, 00:09:28.150 "num_base_bdevs_operational": 4, 00:09:28.150 "base_bdevs_list": [ 00:09:28.150 { 00:09:28.150 "name": "BaseBdev1", 00:09:28.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.150 "is_configured": false, 00:09:28.150 "data_offset": 0, 00:09:28.150 "data_size": 0 00:09:28.150 }, 00:09:28.150 { 00:09:28.150 "name": null, 00:09:28.150 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:28.150 "is_configured": false, 00:09:28.150 "data_offset": 0, 00:09:28.150 "data_size": 65536 00:09:28.150 }, 00:09:28.150 { 00:09:28.150 "name": "BaseBdev3", 00:09:28.150 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:28.150 "is_configured": true, 00:09:28.150 "data_offset": 0, 00:09:28.150 "data_size": 65536 00:09:28.150 }, 00:09:28.150 { 00:09:28.150 "name": "BaseBdev4", 00:09:28.150 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:28.150 "is_configured": true, 00:09:28.150 "data_offset": 0, 00:09:28.150 "data_size": 65536 00:09:28.150 } 00:09:28.150 ] 00:09:28.150 }' 00:09:28.150 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.150 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:28.410 21:41:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.410 [2024-11-27 21:41:51.486793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.410 BaseBdev1 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.410 21:41:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.410 [ 00:09:28.410 { 00:09:28.410 "name": "BaseBdev1", 00:09:28.410 "aliases": [ 00:09:28.410 "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9" 00:09:28.410 ], 00:09:28.410 "product_name": "Malloc disk", 00:09:28.410 "block_size": 512, 00:09:28.410 "num_blocks": 65536, 00:09:28.410 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:28.410 "assigned_rate_limits": { 00:09:28.410 "rw_ios_per_sec": 0, 00:09:28.410 "rw_mbytes_per_sec": 0, 00:09:28.410 "r_mbytes_per_sec": 0, 00:09:28.411 "w_mbytes_per_sec": 0 00:09:28.411 }, 00:09:28.411 "claimed": true, 00:09:28.411 "claim_type": "exclusive_write", 00:09:28.411 "zoned": false, 00:09:28.411 "supported_io_types": { 00:09:28.411 "read": true, 00:09:28.411 "write": true, 00:09:28.411 "unmap": true, 00:09:28.411 "flush": true, 00:09:28.411 "reset": true, 00:09:28.411 "nvme_admin": false, 00:09:28.411 "nvme_io": false, 00:09:28.411 "nvme_io_md": false, 00:09:28.411 "write_zeroes": true, 00:09:28.411 "zcopy": true, 00:09:28.411 "get_zone_info": false, 00:09:28.411 "zone_management": false, 00:09:28.411 "zone_append": false, 00:09:28.411 "compare": false, 00:09:28.411 "compare_and_write": false, 00:09:28.411 "abort": true, 00:09:28.411 "seek_hole": false, 00:09:28.411 "seek_data": false, 00:09:28.411 "copy": true, 00:09:28.411 "nvme_iov_md": false 00:09:28.411 }, 00:09:28.411 "memory_domains": [ 00:09:28.411 { 00:09:28.411 "dma_device_id": "system", 00:09:28.411 "dma_device_type": 1 00:09:28.411 }, 00:09:28.411 { 00:09:28.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.411 "dma_device_type": 2 00:09:28.411 } 00:09:28.411 ], 00:09:28.411 "driver_specific": {} 00:09:28.411 } 00:09:28.411 ] 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.411 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.670 "name": "Existed_Raid", 00:09:28.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.670 "strip_size_kb": 64, 00:09:28.670 "state": "configuring", 00:09:28.670 "raid_level": "concat", 00:09:28.670 "superblock": false, 
00:09:28.670 "num_base_bdevs": 4, 00:09:28.670 "num_base_bdevs_discovered": 3, 00:09:28.670 "num_base_bdevs_operational": 4, 00:09:28.670 "base_bdevs_list": [ 00:09:28.670 { 00:09:28.670 "name": "BaseBdev1", 00:09:28.670 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:28.670 "is_configured": true, 00:09:28.670 "data_offset": 0, 00:09:28.670 "data_size": 65536 00:09:28.670 }, 00:09:28.670 { 00:09:28.670 "name": null, 00:09:28.670 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:28.670 "is_configured": false, 00:09:28.670 "data_offset": 0, 00:09:28.670 "data_size": 65536 00:09:28.670 }, 00:09:28.670 { 00:09:28.670 "name": "BaseBdev3", 00:09:28.670 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:28.670 "is_configured": true, 00:09:28.670 "data_offset": 0, 00:09:28.670 "data_size": 65536 00:09:28.670 }, 00:09:28.670 { 00:09:28.670 "name": "BaseBdev4", 00:09:28.670 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:28.670 "is_configured": true, 00:09:28.670 "data_offset": 0, 00:09:28.670 "data_size": 65536 00:09:28.670 } 00:09:28.670 ] 00:09:28.670 }' 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.670 21:41:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.929 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.929 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.929 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.929 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.929 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:29.189 21:41:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.189 [2024-11-27 21:41:52.069863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.189 21:41:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.189 "name": "Existed_Raid", 00:09:29.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.189 "strip_size_kb": 64, 00:09:29.189 "state": "configuring", 00:09:29.189 "raid_level": "concat", 00:09:29.189 "superblock": false, 00:09:29.189 "num_base_bdevs": 4, 00:09:29.189 "num_base_bdevs_discovered": 2, 00:09:29.189 "num_base_bdevs_operational": 4, 00:09:29.189 "base_bdevs_list": [ 00:09:29.189 { 00:09:29.189 "name": "BaseBdev1", 00:09:29.189 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:29.189 "is_configured": true, 00:09:29.189 "data_offset": 0, 00:09:29.189 "data_size": 65536 00:09:29.189 }, 00:09:29.189 { 00:09:29.189 "name": null, 00:09:29.189 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:29.189 "is_configured": false, 00:09:29.189 "data_offset": 0, 00:09:29.189 "data_size": 65536 00:09:29.189 }, 00:09:29.189 { 00:09:29.189 "name": null, 00:09:29.189 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:29.189 "is_configured": false, 00:09:29.189 "data_offset": 0, 00:09:29.189 "data_size": 65536 00:09:29.189 }, 00:09:29.189 { 00:09:29.189 "name": "BaseBdev4", 00:09:29.189 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:29.189 "is_configured": true, 00:09:29.189 "data_offset": 0, 00:09:29.189 "data_size": 65536 00:09:29.189 } 00:09:29.189 ] 00:09:29.189 }' 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.189 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.448 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.708 [2024-11-27 21:41:52.569037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.708 "name": "Existed_Raid", 00:09:29.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.708 "strip_size_kb": 64, 00:09:29.708 "state": "configuring", 00:09:29.708 "raid_level": "concat", 00:09:29.708 "superblock": false, 00:09:29.708 "num_base_bdevs": 4, 00:09:29.708 "num_base_bdevs_discovered": 3, 00:09:29.708 "num_base_bdevs_operational": 4, 00:09:29.708 "base_bdevs_list": [ 00:09:29.708 { 00:09:29.708 "name": "BaseBdev1", 00:09:29.708 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:29.708 "is_configured": true, 00:09:29.708 "data_offset": 0, 00:09:29.708 "data_size": 65536 00:09:29.708 }, 00:09:29.708 { 00:09:29.708 "name": null, 00:09:29.708 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:29.708 "is_configured": false, 00:09:29.708 "data_offset": 0, 00:09:29.708 "data_size": 65536 00:09:29.708 }, 00:09:29.708 { 00:09:29.708 "name": "BaseBdev3", 00:09:29.708 "uuid": 
"9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:29.708 "is_configured": true, 00:09:29.708 "data_offset": 0, 00:09:29.708 "data_size": 65536 00:09:29.708 }, 00:09:29.708 { 00:09:29.708 "name": "BaseBdev4", 00:09:29.708 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:29.708 "is_configured": true, 00:09:29.708 "data_offset": 0, 00:09:29.708 "data_size": 65536 00:09:29.708 } 00:09:29.708 ] 00:09:29.708 }' 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.708 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:29.968 21:41:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.969 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.969 21:41:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.969 [2024-11-27 21:41:53.004320] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
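Every `verify_raid_bdev_state` call in this log expands to the same sequence: fetch `bdev_raid_get_bdevs all`, select the named raid bdev with `jq -r '.[] | select(.name == "Existed_Raid")'`, then compare `state`, `raid_level`, `strip_size_kb`, and the discovered/operational counters. A minimal Python sketch of that check logic, run against an abridged copy of the JSON dumped above (sample data, not a live RPC call; the real helper lives in `bdev/bdev_raid.sh`):

```python
import json

def verify_raid_bdev_state(info, expected_state, raid_level,
                           strip_size_kb, num_operational):
    """Mirror the shell helper's field checks on one raid bdev's RPC info."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size_kb
    assert info["num_base_bdevs_operational"] == num_operational
    # The discovered counter must agree with the configured base bdev entries
    configured = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["num_base_bdevs_discovered"] == configured

# Abridged sample mirroring the state dumped above after BaseBdev1's removal:
# two slots unconfigured (name null), two base bdevs still attached.
raid_info = json.loads("""{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "concat",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 4,
  "base_bdevs_list": [
    {"name": null,        "is_configured": false},
    {"name": null,        "is_configured": false},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}""")

verify_raid_bdev_state(raid_info, "configuring", "concat", 64, 4)
```

The raid stays in `configuring` rather than failing because a concat array with missing members cannot go online until every slot is re-populated, which is exactly what the subsequent `bdev_raid_add_base_bdev` steps exercise.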
00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.969 "name": "Existed_Raid", 00:09:29.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.969 "strip_size_kb": 64, 00:09:29.969 "state": "configuring", 00:09:29.969 "raid_level": "concat", 00:09:29.969 "superblock": false, 00:09:29.969 "num_base_bdevs": 4, 00:09:29.969 
"num_base_bdevs_discovered": 2, 00:09:29.969 "num_base_bdevs_operational": 4, 00:09:29.969 "base_bdevs_list": [ 00:09:29.969 { 00:09:29.969 "name": null, 00:09:29.969 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:29.969 "is_configured": false, 00:09:29.969 "data_offset": 0, 00:09:29.969 "data_size": 65536 00:09:29.969 }, 00:09:29.969 { 00:09:29.969 "name": null, 00:09:29.969 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:29.969 "is_configured": false, 00:09:29.969 "data_offset": 0, 00:09:29.969 "data_size": 65536 00:09:29.969 }, 00:09:29.969 { 00:09:29.969 "name": "BaseBdev3", 00:09:29.969 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:29.969 "is_configured": true, 00:09:29.969 "data_offset": 0, 00:09:29.969 "data_size": 65536 00:09:29.969 }, 00:09:29.969 { 00:09:29.969 "name": "BaseBdev4", 00:09:29.969 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:29.969 "is_configured": true, 00:09:29.969 "data_offset": 0, 00:09:29.969 "data_size": 65536 00:09:29.969 } 00:09:29.969 ] 00:09:29.969 }' 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.969 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 [2024-11-27 21:41:53.469936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
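Between the full-state checks, the test probes single fields with `jq` paths such as `.[0].base_bdevs_list[1].is_configured` and `-r '.[0].base_bdevs_list[0].uuid'` (the latter saves the removed bdev's UUID so it can be recreated as `NewBaseBdev`). The same lookups, sketched in Python over sample data shaped like the `bdev_raid_get_bdevs all` reply (a JSON array of raid bdevs; UUIDs copied from the log above):

```python
import json

# Sample reply shaped like `bdev_raid_get_bdevs all` output
reply = json.loads("""[
  {
    "name": "Existed_Raid",
    "base_bdevs_list": [
      {"name": null,       "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9",
       "is_configured": false},
      {"name": "BaseBdev2","uuid": "199141a2-b12e-4041-ba97-d098997e9be1",
       "is_configured": true}
    ]
  }
]""")

# Equivalent of: jq '.[0].base_bdevs_list[1].is_configured'
configured = reply[0]["base_bdevs_list"][1]["is_configured"]

# Equivalent of: jq -r '.[0].base_bdevs_list[0].uuid'
saved_uuid = reply[0]["base_bdevs_list"][0]["uuid"]
```

The saved UUID is what makes the later `bdev_malloc_create 32 512 -b NewBaseBdev -u 4f2cf510-…` step work: the raid module matches the new malloc bdev to the vacant slot by UUID and reclaims it automatically.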
00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.538 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.538 "name": "Existed_Raid", 00:09:30.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.538 "strip_size_kb": 64, 00:09:30.538 "state": "configuring", 00:09:30.538 "raid_level": "concat", 00:09:30.538 "superblock": false, 00:09:30.538 "num_base_bdevs": 4, 00:09:30.538 "num_base_bdevs_discovered": 3, 00:09:30.538 "num_base_bdevs_operational": 4, 00:09:30.538 "base_bdevs_list": [ 00:09:30.538 { 00:09:30.538 "name": null, 00:09:30.538 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:30.538 "is_configured": false, 00:09:30.538 "data_offset": 0, 00:09:30.538 "data_size": 65536 00:09:30.538 }, 00:09:30.538 { 00:09:30.538 "name": "BaseBdev2", 00:09:30.538 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:30.538 "is_configured": true, 00:09:30.538 "data_offset": 0, 00:09:30.538 "data_size": 65536 00:09:30.538 }, 00:09:30.538 { 00:09:30.538 "name": "BaseBdev3", 00:09:30.538 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:30.538 "is_configured": true, 00:09:30.538 "data_offset": 0, 00:09:30.539 "data_size": 65536 00:09:30.539 }, 00:09:30.539 { 00:09:30.539 "name": "BaseBdev4", 00:09:30.539 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:30.539 "is_configured": true, 00:09:30.539 "data_offset": 0, 00:09:30.539 "data_size": 65536 00:09:30.539 } 00:09:30.539 ] 00:09:30.539 }' 00:09:30.539 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.539 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.109 21:41:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.109 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.109 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4f2cf510-ae2c-4417-b05b-a0af9e4b40b9 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.110 [2024-11-27 21:41:54.043843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:31.110 [2024-11-27 21:41:54.043940] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:31.110 [2024-11-27 21:41:54.043964] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:31.110 [2024-11-27 21:41:54.044308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002a10 00:09:31.110 [2024-11-27 21:41:54.044470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:31.110 [2024-11-27 21:41:54.044512] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:31.110 [2024-11-27 21:41:54.044738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.110 NewBaseBdev 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:31.110 [ 00:09:31.110 { 00:09:31.110 "name": "NewBaseBdev", 00:09:31.110 "aliases": [ 00:09:31.110 "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9" 00:09:31.110 ], 00:09:31.110 "product_name": "Malloc disk", 00:09:31.110 "block_size": 512, 00:09:31.110 "num_blocks": 65536, 00:09:31.110 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:31.110 "assigned_rate_limits": { 00:09:31.110 "rw_ios_per_sec": 0, 00:09:31.110 "rw_mbytes_per_sec": 0, 00:09:31.110 "r_mbytes_per_sec": 0, 00:09:31.110 "w_mbytes_per_sec": 0 00:09:31.110 }, 00:09:31.110 "claimed": true, 00:09:31.110 "claim_type": "exclusive_write", 00:09:31.110 "zoned": false, 00:09:31.110 "supported_io_types": { 00:09:31.110 "read": true, 00:09:31.110 "write": true, 00:09:31.110 "unmap": true, 00:09:31.110 "flush": true, 00:09:31.110 "reset": true, 00:09:31.110 "nvme_admin": false, 00:09:31.110 "nvme_io": false, 00:09:31.110 "nvme_io_md": false, 00:09:31.110 "write_zeroes": true, 00:09:31.110 "zcopy": true, 00:09:31.110 "get_zone_info": false, 00:09:31.110 "zone_management": false, 00:09:31.110 "zone_append": false, 00:09:31.110 "compare": false, 00:09:31.110 "compare_and_write": false, 00:09:31.110 "abort": true, 00:09:31.110 "seek_hole": false, 00:09:31.110 "seek_data": false, 00:09:31.110 "copy": true, 00:09:31.110 "nvme_iov_md": false 00:09:31.110 }, 00:09:31.110 "memory_domains": [ 00:09:31.110 { 00:09:31.110 "dma_device_id": "system", 00:09:31.110 "dma_device_type": 1 00:09:31.110 }, 00:09:31.110 { 00:09:31.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.110 "dma_device_type": 2 00:09:31.110 } 00:09:31.110 ], 00:09:31.110 "driver_specific": {} 00:09:31.110 } 00:09:31.110 ] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.110 "name": "Existed_Raid", 00:09:31.110 "uuid": "4a1fc57b-861c-467b-82d0-df0513e25a23", 00:09:31.110 "strip_size_kb": 64, 00:09:31.110 "state": "online", 00:09:31.110 "raid_level": "concat", 00:09:31.110 "superblock": false, 00:09:31.110 
"num_base_bdevs": 4, 00:09:31.110 "num_base_bdevs_discovered": 4, 00:09:31.110 "num_base_bdevs_operational": 4, 00:09:31.110 "base_bdevs_list": [ 00:09:31.110 { 00:09:31.110 "name": "NewBaseBdev", 00:09:31.110 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:31.110 "is_configured": true, 00:09:31.110 "data_offset": 0, 00:09:31.110 "data_size": 65536 00:09:31.110 }, 00:09:31.110 { 00:09:31.110 "name": "BaseBdev2", 00:09:31.110 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:31.110 "is_configured": true, 00:09:31.110 "data_offset": 0, 00:09:31.110 "data_size": 65536 00:09:31.110 }, 00:09:31.110 { 00:09:31.110 "name": "BaseBdev3", 00:09:31.110 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:31.110 "is_configured": true, 00:09:31.110 "data_offset": 0, 00:09:31.110 "data_size": 65536 00:09:31.110 }, 00:09:31.110 { 00:09:31.110 "name": "BaseBdev4", 00:09:31.110 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:31.110 "is_configured": true, 00:09:31.110 "data_offset": 0, 00:09:31.110 "data_size": 65536 00:09:31.110 } 00:09:31.110 ] 00:09:31.110 }' 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.110 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.370 21:41:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.370 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.630 [2024-11-27 21:41:54.495419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.631 "name": "Existed_Raid", 00:09:31.631 "aliases": [ 00:09:31.631 "4a1fc57b-861c-467b-82d0-df0513e25a23" 00:09:31.631 ], 00:09:31.631 "product_name": "Raid Volume", 00:09:31.631 "block_size": 512, 00:09:31.631 "num_blocks": 262144, 00:09:31.631 "uuid": "4a1fc57b-861c-467b-82d0-df0513e25a23", 00:09:31.631 "assigned_rate_limits": { 00:09:31.631 "rw_ios_per_sec": 0, 00:09:31.631 "rw_mbytes_per_sec": 0, 00:09:31.631 "r_mbytes_per_sec": 0, 00:09:31.631 "w_mbytes_per_sec": 0 00:09:31.631 }, 00:09:31.631 "claimed": false, 00:09:31.631 "zoned": false, 00:09:31.631 "supported_io_types": { 00:09:31.631 "read": true, 00:09:31.631 "write": true, 00:09:31.631 "unmap": true, 00:09:31.631 "flush": true, 00:09:31.631 "reset": true, 00:09:31.631 "nvme_admin": false, 00:09:31.631 "nvme_io": false, 00:09:31.631 "nvme_io_md": false, 00:09:31.631 "write_zeroes": true, 00:09:31.631 "zcopy": false, 00:09:31.631 "get_zone_info": false, 00:09:31.631 "zone_management": false, 00:09:31.631 "zone_append": false, 00:09:31.631 "compare": false, 00:09:31.631 "compare_and_write": false, 00:09:31.631 "abort": false, 00:09:31.631 "seek_hole": false, 00:09:31.631 "seek_data": false, 00:09:31.631 "copy": false, 00:09:31.631 "nvme_iov_md": false 00:09:31.631 }, 
00:09:31.631 "memory_domains": [ 00:09:31.631 { 00:09:31.631 "dma_device_id": "system", 00:09:31.631 "dma_device_type": 1 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.631 "dma_device_type": 2 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "system", 00:09:31.631 "dma_device_type": 1 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.631 "dma_device_type": 2 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "system", 00:09:31.631 "dma_device_type": 1 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.631 "dma_device_type": 2 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "system", 00:09:31.631 "dma_device_type": 1 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.631 "dma_device_type": 2 00:09:31.631 } 00:09:31.631 ], 00:09:31.631 "driver_specific": { 00:09:31.631 "raid": { 00:09:31.631 "uuid": "4a1fc57b-861c-467b-82d0-df0513e25a23", 00:09:31.631 "strip_size_kb": 64, 00:09:31.631 "state": "online", 00:09:31.631 "raid_level": "concat", 00:09:31.631 "superblock": false, 00:09:31.631 "num_base_bdevs": 4, 00:09:31.631 "num_base_bdevs_discovered": 4, 00:09:31.631 "num_base_bdevs_operational": 4, 00:09:31.631 "base_bdevs_list": [ 00:09:31.631 { 00:09:31.631 "name": "NewBaseBdev", 00:09:31.631 "uuid": "4f2cf510-ae2c-4417-b05b-a0af9e4b40b9", 00:09:31.631 "is_configured": true, 00:09:31.631 "data_offset": 0, 00:09:31.631 "data_size": 65536 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "name": "BaseBdev2", 00:09:31.631 "uuid": "199141a2-b12e-4041-ba97-d098997e9be1", 00:09:31.631 "is_configured": true, 00:09:31.631 "data_offset": 0, 00:09:31.631 "data_size": 65536 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "name": "BaseBdev3", 00:09:31.631 "uuid": "9903ea01-3000-4769-b1a9-a5ecea0eb5cd", 00:09:31.631 "is_configured": true, 00:09:31.631 "data_offset": 0, 
00:09:31.631 "data_size": 65536 00:09:31.631 }, 00:09:31.631 { 00:09:31.631 "name": "BaseBdev4", 00:09:31.631 "uuid": "fd9c802b-1166-4244-aa64-33e047d65c2d", 00:09:31.631 "is_configured": true, 00:09:31.631 "data_offset": 0, 00:09:31.631 "data_size": 65536 00:09:31.631 } 00:09:31.631 ] 00:09:31.631 } 00:09:31.631 } 00:09:31.631 }' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:31.631 BaseBdev2 00:09:31.631 BaseBdev3 00:09:31.631 BaseBdev4' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.631 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.892 [2024-11-27 21:41:54.774619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:31.892 [2024-11-27 21:41:54.774688] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.892 [2024-11-27 21:41:54.774761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.892 [2024-11-27 21:41:54.774836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.892 [2024-11-27 21:41:54.774846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81887 00:09:31.892 21:41:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81887 ']' 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81887 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81887 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81887' 00:09:31.892 killing process with pid 81887 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 81887 00:09:31.892 [2024-11-27 21:41:54.806358] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.892 21:41:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 81887 00:09:31.892 [2024-11-27 21:41:54.845919] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.152 ************************************ 00:09:32.152 END TEST raid_state_function_test 00:09:32.152 ************************************ 00:09:32.152 00:09:32.152 real 0m9.445s 00:09:32.152 user 0m16.163s 00:09:32.152 sys 0m1.955s 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.152 21:41:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test concat 4 true 00:09:32.152 21:41:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:32.152 21:41:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.152 21:41:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.152 ************************************ 00:09:32.152 START TEST raid_state_function_test_sb 00:09:32.152 ************************************ 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- 
# echo BaseBdev3 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:32.152 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82536 00:09:32.153 21:41:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82536' 00:09:32.153 Process raid pid: 82536 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82536 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82536 ']' 00:09:32.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.153 21:41:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.153 [2024-11-27 21:41:55.218050] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:32.153 [2024-11-27 21:41:55.218177] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.413 [2024-11-27 21:41:55.371824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.413 [2024-11-27 21:41:55.397205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.413 [2024-11-27 21:41:55.439309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.413 [2024-11-27 21:41:55.439436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.982 [2024-11-27 21:41:56.053859] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.982 [2024-11-27 21:41:56.053976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.982 [2024-11-27 21:41:56.054023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.982 [2024-11-27 21:41:56.054087] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.982 [2024-11-27 21:41:56.054120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:09:32.982 [2024-11-27 21:41:56.054157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.982 [2024-11-27 21:41:56.054201] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:32.982 [2024-11-27 21:41:56.054255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.982 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.983 
21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.983 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.250 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.250 "name": "Existed_Raid", 00:09:33.250 "uuid": "ae5ee9b4-6153-43f4-a0a3-9a739c47a77d", 00:09:33.250 "strip_size_kb": 64, 00:09:33.250 "state": "configuring", 00:09:33.250 "raid_level": "concat", 00:09:33.250 "superblock": true, 00:09:33.250 "num_base_bdevs": 4, 00:09:33.250 "num_base_bdevs_discovered": 0, 00:09:33.250 "num_base_bdevs_operational": 4, 00:09:33.250 "base_bdevs_list": [ 00:09:33.250 { 00:09:33.250 "name": "BaseBdev1", 00:09:33.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.250 "is_configured": false, 00:09:33.250 "data_offset": 0, 00:09:33.250 "data_size": 0 00:09:33.250 }, 00:09:33.250 { 00:09:33.250 "name": "BaseBdev2", 00:09:33.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.250 "is_configured": false, 00:09:33.250 "data_offset": 0, 00:09:33.250 "data_size": 0 00:09:33.250 }, 00:09:33.250 { 00:09:33.250 "name": "BaseBdev3", 00:09:33.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.250 "is_configured": false, 00:09:33.250 "data_offset": 0, 00:09:33.250 "data_size": 0 00:09:33.250 }, 00:09:33.250 { 00:09:33.250 "name": "BaseBdev4", 00:09:33.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.250 "is_configured": false, 00:09:33.250 "data_offset": 0, 00:09:33.250 "data_size": 0 00:09:33.250 } 00:09:33.250 ] 00:09:33.250 }' 00:09:33.250 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.250 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 21:41:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 [2024-11-27 21:41:56.425117] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.518 [2024-11-27 21:41:56.425208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 [2024-11-27 21:41:56.437133] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.518 [2024-11-27 21:41:56.437176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.518 [2024-11-27 21:41:56.437185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.518 [2024-11-27 21:41:56.437195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.518 [2024-11-27 21:41:56.437202] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.518 [2024-11-27 21:41:56.437210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.518 [2024-11-27 21:41:56.437216] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:09:33.518 [2024-11-27 21:41:56.437225] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 BaseBdev1 00:09:33.518 [2024-11-27 21:41:56.458095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.518 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.518 [ 00:09:33.518 { 00:09:33.518 "name": "BaseBdev1", 00:09:33.518 "aliases": [ 00:09:33.518 "a3fe201f-8982-4040-afb8-be5c39d78970" 00:09:33.518 ], 00:09:33.518 "product_name": "Malloc disk", 00:09:33.518 "block_size": 512, 00:09:33.518 "num_blocks": 65536, 00:09:33.518 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:33.518 "assigned_rate_limits": { 00:09:33.518 "rw_ios_per_sec": 0, 00:09:33.518 "rw_mbytes_per_sec": 0, 00:09:33.518 "r_mbytes_per_sec": 0, 00:09:33.518 "w_mbytes_per_sec": 0 00:09:33.518 }, 00:09:33.518 "claimed": true, 00:09:33.518 "claim_type": "exclusive_write", 00:09:33.518 "zoned": false, 00:09:33.518 "supported_io_types": { 00:09:33.518 "read": true, 00:09:33.518 "write": true, 00:09:33.518 "unmap": true, 00:09:33.518 "flush": true, 00:09:33.518 "reset": true, 00:09:33.518 "nvme_admin": false, 00:09:33.519 "nvme_io": false, 00:09:33.519 "nvme_io_md": false, 00:09:33.519 "write_zeroes": true, 00:09:33.519 "zcopy": true, 00:09:33.519 "get_zone_info": false, 00:09:33.519 "zone_management": false, 00:09:33.519 "zone_append": false, 00:09:33.519 "compare": false, 00:09:33.519 "compare_and_write": false, 00:09:33.519 "abort": true, 00:09:33.519 "seek_hole": false, 00:09:33.519 "seek_data": false, 00:09:33.519 "copy": true, 00:09:33.519 "nvme_iov_md": false 00:09:33.519 }, 00:09:33.519 "memory_domains": [ 00:09:33.519 { 00:09:33.519 "dma_device_id": "system", 00:09:33.519 "dma_device_type": 1 00:09:33.519 }, 00:09:33.519 { 00:09:33.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.519 "dma_device_type": 2 00:09:33.519 } 
00:09:33.519 ], 00:09:33.519 "driver_specific": {} 00:09:33.519 } 00:09:33.519 ] 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.519 21:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.519 "name": "Existed_Raid", 00:09:33.519 "uuid": "af95dff5-7d64-4352-9f4e-ab28f14a8abe", 00:09:33.519 "strip_size_kb": 64, 00:09:33.519 "state": "configuring", 00:09:33.519 "raid_level": "concat", 00:09:33.519 "superblock": true, 00:09:33.519 "num_base_bdevs": 4, 00:09:33.519 "num_base_bdevs_discovered": 1, 00:09:33.519 "num_base_bdevs_operational": 4, 00:09:33.519 "base_bdevs_list": [ 00:09:33.519 { 00:09:33.519 "name": "BaseBdev1", 00:09:33.519 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:33.519 "is_configured": true, 00:09:33.519 "data_offset": 2048, 00:09:33.519 "data_size": 63488 00:09:33.519 }, 00:09:33.519 { 00:09:33.519 "name": "BaseBdev2", 00:09:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.519 "is_configured": false, 00:09:33.519 "data_offset": 0, 00:09:33.519 "data_size": 0 00:09:33.519 }, 00:09:33.519 { 00:09:33.519 "name": "BaseBdev3", 00:09:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.519 "is_configured": false, 00:09:33.519 "data_offset": 0, 00:09:33.519 "data_size": 0 00:09:33.519 }, 00:09:33.519 { 00:09:33.519 "name": "BaseBdev4", 00:09:33.519 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.519 "is_configured": false, 00:09:33.519 "data_offset": 0, 00:09:33.519 "data_size": 0 00:09:33.519 } 00:09:33.519 ] 00:09:33.519 }' 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.519 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.090 21:41:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 [2024-11-27 21:41:56.949329] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:34.090 [2024-11-27 21:41:56.949383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 [2024-11-27 21:41:56.961341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.090 [2024-11-27 21:41:56.963264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.090 [2024-11-27 21:41:56.963335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.090 [2024-11-27 21:41:56.963364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:34.090 [2024-11-27 21:41:56.963386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:34.090 [2024-11-27 21:41:56.963404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:34.090 [2024-11-27 21:41:56.963423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.090 21:41:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.090 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:09:34.090 "name": "Existed_Raid", 00:09:34.090 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:34.090 "strip_size_kb": 64, 00:09:34.090 "state": "configuring", 00:09:34.090 "raid_level": "concat", 00:09:34.090 "superblock": true, 00:09:34.090 "num_base_bdevs": 4, 00:09:34.090 "num_base_bdevs_discovered": 1, 00:09:34.090 "num_base_bdevs_operational": 4, 00:09:34.090 "base_bdevs_list": [ 00:09:34.090 { 00:09:34.090 "name": "BaseBdev1", 00:09:34.090 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:34.090 "is_configured": true, 00:09:34.090 "data_offset": 2048, 00:09:34.090 "data_size": 63488 00:09:34.090 }, 00:09:34.090 { 00:09:34.090 "name": "BaseBdev2", 00:09:34.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.090 "is_configured": false, 00:09:34.090 "data_offset": 0, 00:09:34.090 "data_size": 0 00:09:34.090 }, 00:09:34.090 { 00:09:34.090 "name": "BaseBdev3", 00:09:34.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.090 "is_configured": false, 00:09:34.090 "data_offset": 0, 00:09:34.090 "data_size": 0 00:09:34.090 }, 00:09:34.090 { 00:09:34.090 "name": "BaseBdev4", 00:09:34.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.090 "is_configured": false, 00:09:34.090 "data_offset": 0, 00:09:34.090 "data_size": 0 00:09:34.090 } 00:09:34.090 ] 00:09:34.090 }' 00:09:34.090 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.090 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.351 [2024-11-27 21:41:57.367619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:09:34.351 BaseBdev2 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.351 [ 00:09:34.351 { 00:09:34.351 "name": "BaseBdev2", 00:09:34.351 "aliases": [ 00:09:34.351 "2f648638-3a74-432c-845d-2738e8acbb89" 00:09:34.351 ], 00:09:34.351 "product_name": "Malloc disk", 00:09:34.351 "block_size": 512, 00:09:34.351 "num_blocks": 65536, 00:09:34.351 "uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 
00:09:34.351 "assigned_rate_limits": { 00:09:34.351 "rw_ios_per_sec": 0, 00:09:34.351 "rw_mbytes_per_sec": 0, 00:09:34.351 "r_mbytes_per_sec": 0, 00:09:34.351 "w_mbytes_per_sec": 0 00:09:34.351 }, 00:09:34.351 "claimed": true, 00:09:34.351 "claim_type": "exclusive_write", 00:09:34.351 "zoned": false, 00:09:34.351 "supported_io_types": { 00:09:34.351 "read": true, 00:09:34.351 "write": true, 00:09:34.351 "unmap": true, 00:09:34.351 "flush": true, 00:09:34.351 "reset": true, 00:09:34.351 "nvme_admin": false, 00:09:34.351 "nvme_io": false, 00:09:34.351 "nvme_io_md": false, 00:09:34.351 "write_zeroes": true, 00:09:34.351 "zcopy": true, 00:09:34.351 "get_zone_info": false, 00:09:34.351 "zone_management": false, 00:09:34.351 "zone_append": false, 00:09:34.351 "compare": false, 00:09:34.351 "compare_and_write": false, 00:09:34.351 "abort": true, 00:09:34.351 "seek_hole": false, 00:09:34.351 "seek_data": false, 00:09:34.351 "copy": true, 00:09:34.351 "nvme_iov_md": false 00:09:34.351 }, 00:09:34.351 "memory_domains": [ 00:09:34.351 { 00:09:34.351 "dma_device_id": "system", 00:09:34.351 "dma_device_type": 1 00:09:34.351 }, 00:09:34.351 { 00:09:34.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.351 "dma_device_type": 2 00:09:34.351 } 00:09:34.351 ], 00:09:34.351 "driver_specific": {} 00:09:34.351 } 00:09:34.351 ] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.351 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.351 "name": "Existed_Raid", 00:09:34.351 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:34.351 "strip_size_kb": 64, 00:09:34.351 "state": "configuring", 00:09:34.351 "raid_level": "concat", 00:09:34.351 "superblock": true, 00:09:34.352 "num_base_bdevs": 4, 00:09:34.352 "num_base_bdevs_discovered": 2, 00:09:34.352 
"num_base_bdevs_operational": 4, 00:09:34.352 "base_bdevs_list": [ 00:09:34.352 { 00:09:34.352 "name": "BaseBdev1", 00:09:34.352 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:34.352 "is_configured": true, 00:09:34.352 "data_offset": 2048, 00:09:34.352 "data_size": 63488 00:09:34.352 }, 00:09:34.352 { 00:09:34.352 "name": "BaseBdev2", 00:09:34.352 "uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 00:09:34.352 "is_configured": true, 00:09:34.352 "data_offset": 2048, 00:09:34.352 "data_size": 63488 00:09:34.352 }, 00:09:34.352 { 00:09:34.352 "name": "BaseBdev3", 00:09:34.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.352 "is_configured": false, 00:09:34.352 "data_offset": 0, 00:09:34.352 "data_size": 0 00:09:34.352 }, 00:09:34.352 { 00:09:34.352 "name": "BaseBdev4", 00:09:34.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.352 "is_configured": false, 00:09:34.352 "data_offset": 0, 00:09:34.352 "data_size": 0 00:09:34.352 } 00:09:34.352 ] 00:09:34.352 }' 00:09:34.352 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.352 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.921 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.921 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.921 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.921 [2024-11-27 21:41:57.823003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.921 BaseBdev3 00:09:34.921 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.921 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.922 [ 00:09:34.922 { 00:09:34.922 "name": "BaseBdev3", 00:09:34.922 "aliases": [ 00:09:34.922 "9b345f55-cdda-49fc-92bd-77d3ee5b3a13" 00:09:34.922 ], 00:09:34.922 "product_name": "Malloc disk", 00:09:34.922 "block_size": 512, 00:09:34.922 "num_blocks": 65536, 00:09:34.922 "uuid": "9b345f55-cdda-49fc-92bd-77d3ee5b3a13", 00:09:34.922 "assigned_rate_limits": { 00:09:34.922 "rw_ios_per_sec": 0, 00:09:34.922 "rw_mbytes_per_sec": 0, 00:09:34.922 "r_mbytes_per_sec": 0, 00:09:34.922 "w_mbytes_per_sec": 0 00:09:34.922 }, 00:09:34.922 "claimed": true, 00:09:34.922 "claim_type": "exclusive_write", 00:09:34.922 "zoned": false, 00:09:34.922 "supported_io_types": { 
00:09:34.922 "read": true, 00:09:34.922 "write": true, 00:09:34.922 "unmap": true, 00:09:34.922 "flush": true, 00:09:34.922 "reset": true, 00:09:34.922 "nvme_admin": false, 00:09:34.922 "nvme_io": false, 00:09:34.922 "nvme_io_md": false, 00:09:34.922 "write_zeroes": true, 00:09:34.922 "zcopy": true, 00:09:34.922 "get_zone_info": false, 00:09:34.922 "zone_management": false, 00:09:34.922 "zone_append": false, 00:09:34.922 "compare": false, 00:09:34.922 "compare_and_write": false, 00:09:34.922 "abort": true, 00:09:34.922 "seek_hole": false, 00:09:34.922 "seek_data": false, 00:09:34.922 "copy": true, 00:09:34.922 "nvme_iov_md": false 00:09:34.922 }, 00:09:34.922 "memory_domains": [ 00:09:34.922 { 00:09:34.922 "dma_device_id": "system", 00:09:34.922 "dma_device_type": 1 00:09:34.922 }, 00:09:34.922 { 00:09:34.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.922 "dma_device_type": 2 00:09:34.922 } 00:09:34.922 ], 00:09:34.922 "driver_specific": {} 00:09:34.922 } 00:09:34.922 ] 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.922 "name": "Existed_Raid", 00:09:34.922 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:34.922 "strip_size_kb": 64, 00:09:34.922 "state": "configuring", 00:09:34.922 "raid_level": "concat", 00:09:34.922 "superblock": true, 00:09:34.922 "num_base_bdevs": 4, 00:09:34.922 "num_base_bdevs_discovered": 3, 00:09:34.922 "num_base_bdevs_operational": 4, 00:09:34.922 "base_bdevs_list": [ 00:09:34.922 { 00:09:34.922 "name": "BaseBdev1", 00:09:34.922 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:34.922 "is_configured": true, 00:09:34.922 "data_offset": 2048, 00:09:34.922 "data_size": 63488 00:09:34.922 }, 00:09:34.922 { 00:09:34.922 "name": "BaseBdev2", 00:09:34.922 
"uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 00:09:34.922 "is_configured": true, 00:09:34.922 "data_offset": 2048, 00:09:34.922 "data_size": 63488 00:09:34.922 }, 00:09:34.922 { 00:09:34.922 "name": "BaseBdev3", 00:09:34.922 "uuid": "9b345f55-cdda-49fc-92bd-77d3ee5b3a13", 00:09:34.922 "is_configured": true, 00:09:34.922 "data_offset": 2048, 00:09:34.922 "data_size": 63488 00:09:34.922 }, 00:09:34.922 { 00:09:34.922 "name": "BaseBdev4", 00:09:34.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.922 "is_configured": false, 00:09:34.922 "data_offset": 0, 00:09:34.922 "data_size": 0 00:09:34.922 } 00:09:34.922 ] 00:09:34.922 }' 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.922 21:41:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.183 BaseBdev4 00:09:35.183 [2024-11-27 21:41:58.285345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:35.183 [2024-11-27 21:41:58.285575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:35.183 [2024-11-27 21:41:58.285590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:35.183 [2024-11-27 21:41:58.285884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:35.183 [2024-11-27 21:41:58.286031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:35.183 [2024-11-27 21:41:58.286086] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:09:35.183 [2024-11-27 21:41:58.286232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.183 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.444 [ 00:09:35.444 { 00:09:35.444 "name": "BaseBdev4", 00:09:35.444 "aliases": [ 00:09:35.444 "68c3b37a-ba6a-465f-a274-87fdbfcf8d9f" 00:09:35.444 ], 00:09:35.444 "product_name": "Malloc disk", 00:09:35.444 "block_size": 512, 00:09:35.444 
"num_blocks": 65536, 00:09:35.444 "uuid": "68c3b37a-ba6a-465f-a274-87fdbfcf8d9f", 00:09:35.444 "assigned_rate_limits": { 00:09:35.444 "rw_ios_per_sec": 0, 00:09:35.444 "rw_mbytes_per_sec": 0, 00:09:35.444 "r_mbytes_per_sec": 0, 00:09:35.444 "w_mbytes_per_sec": 0 00:09:35.444 }, 00:09:35.444 "claimed": true, 00:09:35.444 "claim_type": "exclusive_write", 00:09:35.444 "zoned": false, 00:09:35.444 "supported_io_types": { 00:09:35.444 "read": true, 00:09:35.444 "write": true, 00:09:35.444 "unmap": true, 00:09:35.444 "flush": true, 00:09:35.444 "reset": true, 00:09:35.444 "nvme_admin": false, 00:09:35.444 "nvme_io": false, 00:09:35.444 "nvme_io_md": false, 00:09:35.444 "write_zeroes": true, 00:09:35.444 "zcopy": true, 00:09:35.444 "get_zone_info": false, 00:09:35.444 "zone_management": false, 00:09:35.444 "zone_append": false, 00:09:35.444 "compare": false, 00:09:35.444 "compare_and_write": false, 00:09:35.444 "abort": true, 00:09:35.444 "seek_hole": false, 00:09:35.444 "seek_data": false, 00:09:35.444 "copy": true, 00:09:35.444 "nvme_iov_md": false 00:09:35.444 }, 00:09:35.444 "memory_domains": [ 00:09:35.444 { 00:09:35.444 "dma_device_id": "system", 00:09:35.444 "dma_device_type": 1 00:09:35.444 }, 00:09:35.444 { 00:09:35.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.444 "dma_device_type": 2 00:09:35.444 } 00:09:35.444 ], 00:09:35.444 "driver_specific": {} 00:09:35.444 } 00:09:35.444 ] 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
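Not part of the log: across the four `bdev_raid_get_bdevs` snapshots in this trace, `Existed_Raid` reports state "configuring" while `num_base_bdevs_discovered` is below `num_base_bdevs` (1, then 2, then 3), and flips to "online" once BaseBdev4 is added and all four base bdevs are discovered. A minimal standalone model of that transition, with a hypothetical helper name:

```python
def raid_state(num_base_bdevs, num_discovered):
    """State the trace shows bdev_raid reporting for a given discovery count."""
    assert 0 <= num_discovered <= num_base_bdevs
    # The raid bdev stays "configuring" until every expected base bdev
    # has been discovered and claimed, then comes "online".
    return "online" if num_discovered == num_base_bdevs else "configuring"

# The four snapshots dumped by rpc_cmd bdev_raid_get_bdevs in the trace:
assert [raid_state(4, n) for n in (1, 2, 3)] == ["configuring"] * 3
print(raid_state(4, 4))  # → online
```

This mirrors why the loop over `(( i < num_base_bdevs ))` calls `verify_raid_bdev_state Existed_Raid configuring` after each of the first three malloc bdevs and only expects `online` after the fourth.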
00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.444 "name": "Existed_Raid", 00:09:35.444 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:35.444 "strip_size_kb": 64, 00:09:35.444 "state": "online", 00:09:35.444 "raid_level": "concat", 00:09:35.444 "superblock": true, 00:09:35.444 "num_base_bdevs": 4, 
00:09:35.444 "num_base_bdevs_discovered": 4, 00:09:35.444 "num_base_bdevs_operational": 4, 00:09:35.444 "base_bdevs_list": [ 00:09:35.444 { 00:09:35.444 "name": "BaseBdev1", 00:09:35.444 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:35.444 "is_configured": true, 00:09:35.444 "data_offset": 2048, 00:09:35.444 "data_size": 63488 00:09:35.444 }, 00:09:35.444 { 00:09:35.444 "name": "BaseBdev2", 00:09:35.444 "uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 00:09:35.444 "is_configured": true, 00:09:35.444 "data_offset": 2048, 00:09:35.444 "data_size": 63488 00:09:35.444 }, 00:09:35.444 { 00:09:35.444 "name": "BaseBdev3", 00:09:35.444 "uuid": "9b345f55-cdda-49fc-92bd-77d3ee5b3a13", 00:09:35.444 "is_configured": true, 00:09:35.444 "data_offset": 2048, 00:09:35.444 "data_size": 63488 00:09:35.444 }, 00:09:35.444 { 00:09:35.444 "name": "BaseBdev4", 00:09:35.444 "uuid": "68c3b37a-ba6a-465f-a274-87fdbfcf8d9f", 00:09:35.444 "is_configured": true, 00:09:35.444 "data_offset": 2048, 00:09:35.444 "data_size": 63488 00:09:35.444 } 00:09:35.444 ] 00:09:35.444 }' 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.444 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.704 
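Not part of the log: the `verify_raid_bdev_properties` step that follows compares the metadata layout of the raid volume against each configured base bdev, using the jq filter `[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")` to reduce a bdev to a signature string. For the plain malloc bdevs here, `md_size`/`md_interleave`/`dif_type` are absent, so jq renders them as empty fields, which is why the trace shows `cmp_raid_bdev='512 '` with trailing spaces and tests `[[ 512 == \5\1\2\ \ \ ]]`. A sketch of the same reduction in Python, with a hypothetical helper name:

```python
def layout_signature(bdev):
    """Join the metadata-layout fields the same way the jq filter does:
    missing keys become empty strings, preserving trailing spaces."""
    keys = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(k) is None else str(bdev[k]) for k in keys)

# Fields taken from the bdev_get_bdevs output in the trace above;
# only block_size is present on these malloc-backed bdevs.
raid = {"name": "Existed_Raid", "block_size": 512}
base = {"name": "BaseBdev1", "block_size": 512}

sig_raid = layout_signature(raid)
assert sig_raid == "512   "            # "512" plus three empty fields
print(sig_raid == layout_signature(base))  # → True
```

A base bdev with a different block size or DIF setting would yield a mismatched signature, which is exactly the condition the `[[ ... == ... ]]` comparisons in the trace are guarding against.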
21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.704 [2024-11-27 21:41:58.725056] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.704 "name": "Existed_Raid", 00:09:35.704 "aliases": [ 00:09:35.704 "886d1d5b-7089-4429-bfda-62bf2f4df254" 00:09:35.704 ], 00:09:35.704 "product_name": "Raid Volume", 00:09:35.704 "block_size": 512, 00:09:35.704 "num_blocks": 253952, 00:09:35.704 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:35.704 "assigned_rate_limits": { 00:09:35.704 "rw_ios_per_sec": 0, 00:09:35.704 "rw_mbytes_per_sec": 0, 00:09:35.704 "r_mbytes_per_sec": 0, 00:09:35.704 "w_mbytes_per_sec": 0 00:09:35.704 }, 00:09:35.704 "claimed": false, 00:09:35.704 "zoned": false, 00:09:35.704 "supported_io_types": { 00:09:35.704 "read": true, 00:09:35.704 "write": true, 00:09:35.704 "unmap": true, 00:09:35.704 "flush": true, 00:09:35.704 "reset": true, 00:09:35.704 "nvme_admin": false, 00:09:35.704 "nvme_io": false, 00:09:35.704 "nvme_io_md": false, 00:09:35.704 "write_zeroes": true, 00:09:35.704 "zcopy": false, 00:09:35.704 "get_zone_info": false, 00:09:35.704 "zone_management": false, 00:09:35.704 "zone_append": false, 00:09:35.704 "compare": false, 00:09:35.704 "compare_and_write": false, 00:09:35.704 "abort": false, 00:09:35.704 "seek_hole": false, 00:09:35.704 "seek_data": false, 00:09:35.704 "copy": false, 00:09:35.704 
"nvme_iov_md": false 00:09:35.704 }, 00:09:35.704 "memory_domains": [ 00:09:35.704 { 00:09:35.704 "dma_device_id": "system", 00:09:35.704 "dma_device_type": 1 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.704 "dma_device_type": 2 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "system", 00:09:35.704 "dma_device_type": 1 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.704 "dma_device_type": 2 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "system", 00:09:35.704 "dma_device_type": 1 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.704 "dma_device_type": 2 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "system", 00:09:35.704 "dma_device_type": 1 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.704 "dma_device_type": 2 00:09:35.704 } 00:09:35.704 ], 00:09:35.704 "driver_specific": { 00:09:35.704 "raid": { 00:09:35.704 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:35.704 "strip_size_kb": 64, 00:09:35.704 "state": "online", 00:09:35.704 "raid_level": "concat", 00:09:35.704 "superblock": true, 00:09:35.704 "num_base_bdevs": 4, 00:09:35.704 "num_base_bdevs_discovered": 4, 00:09:35.704 "num_base_bdevs_operational": 4, 00:09:35.704 "base_bdevs_list": [ 00:09:35.704 { 00:09:35.704 "name": "BaseBdev1", 00:09:35.704 "uuid": "a3fe201f-8982-4040-afb8-be5c39d78970", 00:09:35.704 "is_configured": true, 00:09:35.704 "data_offset": 2048, 00:09:35.704 "data_size": 63488 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "name": "BaseBdev2", 00:09:35.704 "uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 00:09:35.704 "is_configured": true, 00:09:35.704 "data_offset": 2048, 00:09:35.704 "data_size": 63488 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "name": "BaseBdev3", 00:09:35.704 "uuid": "9b345f55-cdda-49fc-92bd-77d3ee5b3a13", 00:09:35.704 "is_configured": true, 
00:09:35.704 "data_offset": 2048, 00:09:35.704 "data_size": 63488 00:09:35.704 }, 00:09:35.704 { 00:09:35.704 "name": "BaseBdev4", 00:09:35.704 "uuid": "68c3b37a-ba6a-465f-a274-87fdbfcf8d9f", 00:09:35.704 "is_configured": true, 00:09:35.704 "data_offset": 2048, 00:09:35.704 "data_size": 63488 00:09:35.704 } 00:09:35.704 ] 00:09:35.704 } 00:09:35.704 } 00:09:35.704 }' 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.704 BaseBdev2 00:09:35.704 BaseBdev3 00:09:35.704 BaseBdev4' 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.704 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.963 21:41:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.963 21:41:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.963 [2024-11-27 21:41:58.992264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.963 [2024-11-27 21:41:58.992293] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.963 [2024-11-27 21:41:58.992352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:35.963 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.963 "name": "Existed_Raid", 00:09:35.963 "uuid": "886d1d5b-7089-4429-bfda-62bf2f4df254", 00:09:35.963 "strip_size_kb": 64, 00:09:35.963 "state": "offline", 00:09:35.963 "raid_level": "concat", 00:09:35.963 "superblock": true, 00:09:35.963 "num_base_bdevs": 4, 00:09:35.963 "num_base_bdevs_discovered": 3, 00:09:35.963 "num_base_bdevs_operational": 3, 00:09:35.963 "base_bdevs_list": [ 00:09:35.963 { 00:09:35.963 "name": null, 00:09:35.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.963 "is_configured": false, 00:09:35.963 "data_offset": 0, 00:09:35.963 "data_size": 63488 00:09:35.963 }, 00:09:35.963 { 00:09:35.963 "name": "BaseBdev2", 00:09:35.963 "uuid": "2f648638-3a74-432c-845d-2738e8acbb89", 00:09:35.963 "is_configured": true, 00:09:35.963 "data_offset": 2048, 00:09:35.963 "data_size": 63488 00:09:35.963 }, 00:09:35.963 { 00:09:35.963 "name": "BaseBdev3", 00:09:35.963 "uuid": "9b345f55-cdda-49fc-92bd-77d3ee5b3a13", 00:09:35.963 "is_configured": true, 00:09:35.963 "data_offset": 2048, 00:09:35.964 "data_size": 63488 00:09:35.964 }, 00:09:35.964 { 00:09:35.964 "name": "BaseBdev4", 00:09:35.964 "uuid": "68c3b37a-ba6a-465f-a274-87fdbfcf8d9f", 00:09:35.964 "is_configured": true, 00:09:35.964 "data_offset": 2048, 00:09:35.964 "data_size": 63488 00:09:35.964 } 00:09:35.964 ] 00:09:35.964 }' 00:09:35.964 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.964 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.532 
21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 [2024-11-27 21:41:59.470893] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 [2024-11-27 21:41:59.542146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:36.532 21:41:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 [2024-11-27 21:41:59.593548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:36.532 [2024-11-27 21:41:59.593600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:36.532 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 BaseBdev2 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 [ 00:09:36.792 { 00:09:36.792 "name": "BaseBdev2", 00:09:36.792 "aliases": [ 00:09:36.792 
"5e51cbc5-f51c-4953-ad70-2e86036de5e3" 00:09:36.792 ], 00:09:36.792 "product_name": "Malloc disk", 00:09:36.792 "block_size": 512, 00:09:36.792 "num_blocks": 65536, 00:09:36.792 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:36.792 "assigned_rate_limits": { 00:09:36.792 "rw_ios_per_sec": 0, 00:09:36.792 "rw_mbytes_per_sec": 0, 00:09:36.792 "r_mbytes_per_sec": 0, 00:09:36.792 "w_mbytes_per_sec": 0 00:09:36.792 }, 00:09:36.792 "claimed": false, 00:09:36.792 "zoned": false, 00:09:36.792 "supported_io_types": { 00:09:36.792 "read": true, 00:09:36.792 "write": true, 00:09:36.792 "unmap": true, 00:09:36.792 "flush": true, 00:09:36.792 "reset": true, 00:09:36.792 "nvme_admin": false, 00:09:36.792 "nvme_io": false, 00:09:36.792 "nvme_io_md": false, 00:09:36.792 "write_zeroes": true, 00:09:36.792 "zcopy": true, 00:09:36.792 "get_zone_info": false, 00:09:36.792 "zone_management": false, 00:09:36.792 "zone_append": false, 00:09:36.792 "compare": false, 00:09:36.792 "compare_and_write": false, 00:09:36.792 "abort": true, 00:09:36.792 "seek_hole": false, 00:09:36.792 "seek_data": false, 00:09:36.792 "copy": true, 00:09:36.792 "nvme_iov_md": false 00:09:36.792 }, 00:09:36.792 "memory_domains": [ 00:09:36.792 { 00:09:36.792 "dma_device_id": "system", 00:09:36.792 "dma_device_type": 1 00:09:36.792 }, 00:09:36.792 { 00:09:36.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.792 "dma_device_type": 2 00:09:36.792 } 00:09:36.792 ], 00:09:36.792 "driver_specific": {} 00:09:36.792 } 00:09:36.792 ] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.792 21:41:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 BaseBdev3 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.792 [ 00:09:36.792 { 
00:09:36.792 "name": "BaseBdev3", 00:09:36.792 "aliases": [ 00:09:36.792 "194f3a4c-8356-4f60-88de-159a259fde6e" 00:09:36.792 ], 00:09:36.792 "product_name": "Malloc disk", 00:09:36.792 "block_size": 512, 00:09:36.792 "num_blocks": 65536, 00:09:36.792 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:36.792 "assigned_rate_limits": { 00:09:36.792 "rw_ios_per_sec": 0, 00:09:36.792 "rw_mbytes_per_sec": 0, 00:09:36.792 "r_mbytes_per_sec": 0, 00:09:36.792 "w_mbytes_per_sec": 0 00:09:36.792 }, 00:09:36.792 "claimed": false, 00:09:36.792 "zoned": false, 00:09:36.792 "supported_io_types": { 00:09:36.792 "read": true, 00:09:36.792 "write": true, 00:09:36.792 "unmap": true, 00:09:36.792 "flush": true, 00:09:36.792 "reset": true, 00:09:36.792 "nvme_admin": false, 00:09:36.792 "nvme_io": false, 00:09:36.792 "nvme_io_md": false, 00:09:36.792 "write_zeroes": true, 00:09:36.792 "zcopy": true, 00:09:36.792 "get_zone_info": false, 00:09:36.792 "zone_management": false, 00:09:36.792 "zone_append": false, 00:09:36.792 "compare": false, 00:09:36.792 "compare_and_write": false, 00:09:36.792 "abort": true, 00:09:36.792 "seek_hole": false, 00:09:36.792 "seek_data": false, 00:09:36.792 "copy": true, 00:09:36.792 "nvme_iov_md": false 00:09:36.792 }, 00:09:36.792 "memory_domains": [ 00:09:36.792 { 00:09:36.792 "dma_device_id": "system", 00:09:36.792 "dma_device_type": 1 00:09:36.792 }, 00:09:36.792 { 00:09:36.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.792 "dma_device_type": 2 00:09:36.792 } 00:09:36.792 ], 00:09:36.792 "driver_specific": {} 00:09:36.792 } 00:09:36.792 ] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.792 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.793 BaseBdev4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:09:36.793 [ 00:09:36.793 { 00:09:36.793 "name": "BaseBdev4", 00:09:36.793 "aliases": [ 00:09:36.793 "884be1e5-0c69-4243-83d4-7f3fbcc6a41c" 00:09:36.793 ], 00:09:36.793 "product_name": "Malloc disk", 00:09:36.793 "block_size": 512, 00:09:36.793 "num_blocks": 65536, 00:09:36.793 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:36.793 "assigned_rate_limits": { 00:09:36.793 "rw_ios_per_sec": 0, 00:09:36.793 "rw_mbytes_per_sec": 0, 00:09:36.793 "r_mbytes_per_sec": 0, 00:09:36.793 "w_mbytes_per_sec": 0 00:09:36.793 }, 00:09:36.793 "claimed": false, 00:09:36.793 "zoned": false, 00:09:36.793 "supported_io_types": { 00:09:36.793 "read": true, 00:09:36.793 "write": true, 00:09:36.793 "unmap": true, 00:09:36.793 "flush": true, 00:09:36.793 "reset": true, 00:09:36.793 "nvme_admin": false, 00:09:36.793 "nvme_io": false, 00:09:36.793 "nvme_io_md": false, 00:09:36.793 "write_zeroes": true, 00:09:36.793 "zcopy": true, 00:09:36.793 "get_zone_info": false, 00:09:36.793 "zone_management": false, 00:09:36.793 "zone_append": false, 00:09:36.793 "compare": false, 00:09:36.793 "compare_and_write": false, 00:09:36.793 "abort": true, 00:09:36.793 "seek_hole": false, 00:09:36.793 "seek_data": false, 00:09:36.793 "copy": true, 00:09:36.793 "nvme_iov_md": false 00:09:36.793 }, 00:09:36.793 "memory_domains": [ 00:09:36.793 { 00:09:36.793 "dma_device_id": "system", 00:09:36.793 "dma_device_type": 1 00:09:36.793 }, 00:09:36.793 { 00:09:36.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.793 "dma_device_type": 2 00:09:36.793 } 00:09:36.793 ], 00:09:36.793 "driver_specific": {} 00:09:36.793 } 00:09:36.793 ] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.793 21:41:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.793 [2024-11-27 21:41:59.867271] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.793 [2024-11-27 21:41:59.867367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.793 [2024-11-27 21:41:59.867455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.793 [2024-11-27 21:41:59.869315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.793 [2024-11-27 21:41:59.869407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.793 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.052 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.052 "name": "Existed_Raid", 00:09:37.052 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:37.052 "strip_size_kb": 64, 00:09:37.052 "state": "configuring", 00:09:37.052 "raid_level": "concat", 00:09:37.053 "superblock": true, 00:09:37.053 "num_base_bdevs": 4, 00:09:37.053 "num_base_bdevs_discovered": 3, 00:09:37.053 "num_base_bdevs_operational": 4, 00:09:37.053 "base_bdevs_list": [ 00:09:37.053 { 00:09:37.053 "name": "BaseBdev1", 00:09:37.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.053 "is_configured": false, 00:09:37.053 "data_offset": 0, 00:09:37.053 "data_size": 0 00:09:37.053 }, 00:09:37.053 { 00:09:37.053 "name": "BaseBdev2", 00:09:37.053 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:37.053 "is_configured": true, 00:09:37.053 "data_offset": 2048, 00:09:37.053 "data_size": 63488 
00:09:37.053 }, 00:09:37.053 { 00:09:37.053 "name": "BaseBdev3", 00:09:37.053 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:37.053 "is_configured": true, 00:09:37.053 "data_offset": 2048, 00:09:37.053 "data_size": 63488 00:09:37.053 }, 00:09:37.053 { 00:09:37.053 "name": "BaseBdev4", 00:09:37.053 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:37.053 "is_configured": true, 00:09:37.053 "data_offset": 2048, 00:09:37.053 "data_size": 63488 00:09:37.053 } 00:09:37.053 ] 00:09:37.053 }' 00:09:37.053 21:41:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.053 21:41:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.312 [2024-11-27 21:42:00.290493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.312 "name": "Existed_Raid", 00:09:37.312 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:37.312 "strip_size_kb": 64, 00:09:37.312 "state": "configuring", 00:09:37.312 "raid_level": "concat", 00:09:37.312 "superblock": true, 00:09:37.312 "num_base_bdevs": 4, 00:09:37.312 "num_base_bdevs_discovered": 2, 00:09:37.312 "num_base_bdevs_operational": 4, 00:09:37.312 "base_bdevs_list": [ 00:09:37.312 { 00:09:37.312 "name": "BaseBdev1", 00:09:37.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.312 "is_configured": false, 00:09:37.312 "data_offset": 0, 00:09:37.312 "data_size": 0 00:09:37.312 }, 00:09:37.312 { 00:09:37.312 "name": null, 00:09:37.312 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:37.312 "is_configured": false, 00:09:37.312 "data_offset": 0, 00:09:37.312 "data_size": 63488 
00:09:37.312 }, 00:09:37.312 { 00:09:37.312 "name": "BaseBdev3", 00:09:37.312 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:37.312 "is_configured": true, 00:09:37.312 "data_offset": 2048, 00:09:37.312 "data_size": 63488 00:09:37.312 }, 00:09:37.312 { 00:09:37.312 "name": "BaseBdev4", 00:09:37.312 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:37.312 "is_configured": true, 00:09:37.312 "data_offset": 2048, 00:09:37.312 "data_size": 63488 00:09:37.312 } 00:09:37.312 ] 00:09:37.312 }' 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.312 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 BaseBdev1 00:09:37.881 [2024-11-27 21:42:00.820550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 [ 00:09:37.881 { 00:09:37.881 "name": "BaseBdev1", 00:09:37.881 "aliases": [ 00:09:37.881 "f904cdf6-841c-41de-ac5e-5280dbf6d381" 00:09:37.881 ], 00:09:37.881 "product_name": "Malloc disk", 00:09:37.881 "block_size": 512, 00:09:37.881 "num_blocks": 65536, 00:09:37.881 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:37.881 "assigned_rate_limits": { 00:09:37.881 "rw_ios_per_sec": 0, 00:09:37.881 "rw_mbytes_per_sec": 0, 
00:09:37.881 "r_mbytes_per_sec": 0, 00:09:37.881 "w_mbytes_per_sec": 0 00:09:37.881 }, 00:09:37.881 "claimed": true, 00:09:37.881 "claim_type": "exclusive_write", 00:09:37.881 "zoned": false, 00:09:37.881 "supported_io_types": { 00:09:37.881 "read": true, 00:09:37.881 "write": true, 00:09:37.881 "unmap": true, 00:09:37.881 "flush": true, 00:09:37.881 "reset": true, 00:09:37.881 "nvme_admin": false, 00:09:37.881 "nvme_io": false, 00:09:37.881 "nvme_io_md": false, 00:09:37.881 "write_zeroes": true, 00:09:37.881 "zcopy": true, 00:09:37.881 "get_zone_info": false, 00:09:37.881 "zone_management": false, 00:09:37.881 "zone_append": false, 00:09:37.881 "compare": false, 00:09:37.881 "compare_and_write": false, 00:09:37.881 "abort": true, 00:09:37.881 "seek_hole": false, 00:09:37.881 "seek_data": false, 00:09:37.881 "copy": true, 00:09:37.881 "nvme_iov_md": false 00:09:37.881 }, 00:09:37.881 "memory_domains": [ 00:09:37.881 { 00:09:37.881 "dma_device_id": "system", 00:09:37.881 "dma_device_type": 1 00:09:37.881 }, 00:09:37.881 { 00:09:37.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.881 "dma_device_type": 2 00:09:37.881 } 00:09:37.881 ], 00:09:37.881 "driver_specific": {} 00:09:37.881 } 00:09:37.881 ] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.881 21:42:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.881 "name": "Existed_Raid", 00:09:37.881 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:37.881 "strip_size_kb": 64, 00:09:37.881 "state": "configuring", 00:09:37.881 "raid_level": "concat", 00:09:37.881 "superblock": true, 00:09:37.881 "num_base_bdevs": 4, 00:09:37.881 "num_base_bdevs_discovered": 3, 00:09:37.881 "num_base_bdevs_operational": 4, 00:09:37.881 "base_bdevs_list": [ 00:09:37.881 { 00:09:37.881 "name": "BaseBdev1", 00:09:37.881 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:37.881 "is_configured": true, 00:09:37.881 "data_offset": 2048, 00:09:37.881 "data_size": 63488 00:09:37.881 }, 00:09:37.881 { 
00:09:37.881 "name": null, 00:09:37.881 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:37.881 "is_configured": false, 00:09:37.881 "data_offset": 0, 00:09:37.881 "data_size": 63488 00:09:37.881 }, 00:09:37.881 { 00:09:37.881 "name": "BaseBdev3", 00:09:37.881 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:37.881 "is_configured": true, 00:09:37.881 "data_offset": 2048, 00:09:37.881 "data_size": 63488 00:09:37.881 }, 00:09:37.881 { 00:09:37.881 "name": "BaseBdev4", 00:09:37.881 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:37.881 "is_configured": true, 00:09:37.881 "data_offset": 2048, 00:09:37.881 "data_size": 63488 00:09:37.881 } 00:09:37.881 ] 00:09:37.881 }' 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.881 21:42:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 [2024-11-27 21:42:01.339960] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.449 21:42:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.449 "name": "Existed_Raid", 00:09:38.449 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:38.449 "strip_size_kb": 64, 00:09:38.449 "state": "configuring", 00:09:38.449 "raid_level": "concat", 00:09:38.449 "superblock": true, 00:09:38.449 "num_base_bdevs": 4, 00:09:38.449 "num_base_bdevs_discovered": 2, 00:09:38.449 "num_base_bdevs_operational": 4, 00:09:38.449 "base_bdevs_list": [ 00:09:38.449 { 00:09:38.449 "name": "BaseBdev1", 00:09:38.449 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:38.449 "is_configured": true, 00:09:38.449 "data_offset": 2048, 00:09:38.449 "data_size": 63488 00:09:38.449 }, 00:09:38.449 { 00:09:38.449 "name": null, 00:09:38.449 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:38.449 "is_configured": false, 00:09:38.449 "data_offset": 0, 00:09:38.449 "data_size": 63488 00:09:38.449 }, 00:09:38.449 { 00:09:38.449 "name": null, 00:09:38.449 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:38.449 "is_configured": false, 00:09:38.449 "data_offset": 0, 00:09:38.449 "data_size": 63488 00:09:38.449 }, 00:09:38.449 { 00:09:38.449 "name": "BaseBdev4", 00:09:38.449 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:38.449 "is_configured": true, 00:09:38.449 "data_offset": 2048, 00:09:38.449 "data_size": 63488 00:09:38.449 } 00:09:38.449 ] 00:09:38.449 }' 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.449 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.708 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.708 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.708 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.708 
21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.708 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.968 [2024-11-27 21:42:01.851117] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.968 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.968 "name": "Existed_Raid", 00:09:38.969 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:38.969 "strip_size_kb": 64, 00:09:38.969 "state": "configuring", 00:09:38.969 "raid_level": "concat", 00:09:38.969 "superblock": true, 00:09:38.969 "num_base_bdevs": 4, 00:09:38.969 "num_base_bdevs_discovered": 3, 00:09:38.969 "num_base_bdevs_operational": 4, 00:09:38.969 "base_bdevs_list": [ 00:09:38.969 { 00:09:38.969 "name": "BaseBdev1", 00:09:38.969 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:38.969 "is_configured": true, 00:09:38.969 "data_offset": 2048, 00:09:38.969 "data_size": 63488 00:09:38.969 }, 00:09:38.969 { 00:09:38.969 "name": null, 00:09:38.969 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:38.969 "is_configured": false, 00:09:38.969 "data_offset": 0, 00:09:38.969 "data_size": 63488 00:09:38.969 }, 00:09:38.969 { 00:09:38.969 "name": "BaseBdev3", 00:09:38.969 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:38.969 "is_configured": true, 00:09:38.969 "data_offset": 2048, 00:09:38.969 "data_size": 63488 00:09:38.969 }, 00:09:38.969 { 00:09:38.969 "name": "BaseBdev4", 00:09:38.969 "uuid": 
"884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:38.969 "is_configured": true, 00:09:38.969 "data_offset": 2048, 00:09:38.969 "data_size": 63488 00:09:38.969 } 00:09:38.969 ] 00:09:38.969 }' 00:09:38.969 21:42:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.969 21:42:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.230 [2024-11-27 21:42:02.266449] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.230 "name": "Existed_Raid", 00:09:39.230 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:39.230 "strip_size_kb": 64, 00:09:39.230 "state": "configuring", 00:09:39.230 "raid_level": "concat", 00:09:39.230 "superblock": true, 00:09:39.230 "num_base_bdevs": 4, 00:09:39.230 "num_base_bdevs_discovered": 2, 00:09:39.230 "num_base_bdevs_operational": 4, 00:09:39.230 "base_bdevs_list": [ 00:09:39.230 { 00:09:39.230 "name": null, 00:09:39.230 
"uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:39.230 "is_configured": false, 00:09:39.230 "data_offset": 0, 00:09:39.230 "data_size": 63488 00:09:39.230 }, 00:09:39.230 { 00:09:39.230 "name": null, 00:09:39.230 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:39.230 "is_configured": false, 00:09:39.230 "data_offset": 0, 00:09:39.230 "data_size": 63488 00:09:39.230 }, 00:09:39.230 { 00:09:39.230 "name": "BaseBdev3", 00:09:39.230 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:39.230 "is_configured": true, 00:09:39.230 "data_offset": 2048, 00:09:39.230 "data_size": 63488 00:09:39.230 }, 00:09:39.230 { 00:09:39.230 "name": "BaseBdev4", 00:09:39.230 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:39.230 "is_configured": true, 00:09:39.230 "data_offset": 2048, 00:09:39.230 "data_size": 63488 00:09:39.230 } 00:09:39.230 ] 00:09:39.230 }' 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.230 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.800 [2024-11-27 21:42:02.800029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:09:39.800 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.801 21:42:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.801 "name": "Existed_Raid", 00:09:39.801 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:39.801 "strip_size_kb": 64, 00:09:39.801 "state": "configuring", 00:09:39.801 "raid_level": "concat", 00:09:39.801 "superblock": true, 00:09:39.801 "num_base_bdevs": 4, 00:09:39.801 "num_base_bdevs_discovered": 3, 00:09:39.801 "num_base_bdevs_operational": 4, 00:09:39.801 "base_bdevs_list": [ 00:09:39.801 { 00:09:39.801 "name": null, 00:09:39.801 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:39.801 "is_configured": false, 00:09:39.801 "data_offset": 0, 00:09:39.801 "data_size": 63488 00:09:39.801 }, 00:09:39.801 { 00:09:39.801 "name": "BaseBdev2", 00:09:39.801 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:39.801 "is_configured": true, 00:09:39.801 "data_offset": 2048, 00:09:39.801 "data_size": 63488 00:09:39.801 }, 00:09:39.801 { 00:09:39.801 "name": "BaseBdev3", 00:09:39.801 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:39.801 "is_configured": true, 00:09:39.801 "data_offset": 2048, 00:09:39.801 "data_size": 63488 00:09:39.801 }, 00:09:39.801 { 00:09:39.801 "name": "BaseBdev4", 00:09:39.801 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:39.801 "is_configured": true, 00:09:39.801 "data_offset": 2048, 00:09:39.801 "data_size": 63488 00:09:39.801 } 00:09:39.801 ] 00:09:39.801 }' 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.801 21:42:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.370 21:42:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f904cdf6-841c-41de-ac5e-5280dbf6d381 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 [2024-11-27 21:42:03.373946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:40.370 [2024-11-27 21:42:03.374198] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:09:40.370 [2024-11-27 21:42:03.374246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:40.370 [2024-11-27 21:42:03.374540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002a10 00:09:40.370 NewBaseBdev 00:09:40.370 [2024-11-27 21:42:03.374693] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:09:40.370 [2024-11-27 21:42:03.374745] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:09:40.370 [2024-11-27 21:42:03.374915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.370 21:42:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.370 [ 00:09:40.370 { 00:09:40.370 "name": "NewBaseBdev", 00:09:40.370 "aliases": [ 00:09:40.370 "f904cdf6-841c-41de-ac5e-5280dbf6d381" 00:09:40.370 ], 00:09:40.370 "product_name": "Malloc disk", 00:09:40.370 "block_size": 512, 00:09:40.370 "num_blocks": 65536, 00:09:40.370 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:40.370 "assigned_rate_limits": { 00:09:40.370 "rw_ios_per_sec": 0, 00:09:40.370 "rw_mbytes_per_sec": 0, 00:09:40.370 "r_mbytes_per_sec": 0, 00:09:40.370 "w_mbytes_per_sec": 0 00:09:40.370 }, 00:09:40.370 "claimed": true, 00:09:40.370 "claim_type": "exclusive_write", 00:09:40.370 "zoned": false, 00:09:40.370 "supported_io_types": { 00:09:40.370 "read": true, 00:09:40.370 "write": true, 00:09:40.370 "unmap": true, 00:09:40.370 "flush": true, 00:09:40.370 "reset": true, 00:09:40.370 "nvme_admin": false, 00:09:40.370 "nvme_io": false, 00:09:40.370 "nvme_io_md": false, 00:09:40.370 "write_zeroes": true, 00:09:40.370 "zcopy": true, 00:09:40.370 "get_zone_info": false, 00:09:40.370 "zone_management": false, 00:09:40.370 "zone_append": false, 00:09:40.370 "compare": false, 00:09:40.370 "compare_and_write": false, 00:09:40.370 "abort": true, 00:09:40.370 "seek_hole": false, 00:09:40.370 "seek_data": false, 00:09:40.370 "copy": true, 00:09:40.370 "nvme_iov_md": false 00:09:40.370 }, 00:09:40.370 "memory_domains": [ 00:09:40.370 { 00:09:40.370 "dma_device_id": "system", 00:09:40.370 "dma_device_type": 1 00:09:40.370 }, 00:09:40.370 { 00:09:40.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.370 "dma_device_type": 2 00:09:40.370 } 00:09:40.370 ], 00:09:40.370 "driver_specific": {} 00:09:40.370 } 00:09:40.370 ] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.370 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:40.370 21:42:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.371 "name": "Existed_Raid", 00:09:40.371 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:40.371 "strip_size_kb": 64, 00:09:40.371 
"state": "online", 00:09:40.371 "raid_level": "concat", 00:09:40.371 "superblock": true, 00:09:40.371 "num_base_bdevs": 4, 00:09:40.371 "num_base_bdevs_discovered": 4, 00:09:40.371 "num_base_bdevs_operational": 4, 00:09:40.371 "base_bdevs_list": [ 00:09:40.371 { 00:09:40.371 "name": "NewBaseBdev", 00:09:40.371 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:40.371 "is_configured": true, 00:09:40.371 "data_offset": 2048, 00:09:40.371 "data_size": 63488 00:09:40.371 }, 00:09:40.371 { 00:09:40.371 "name": "BaseBdev2", 00:09:40.371 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:40.371 "is_configured": true, 00:09:40.371 "data_offset": 2048, 00:09:40.371 "data_size": 63488 00:09:40.371 }, 00:09:40.371 { 00:09:40.371 "name": "BaseBdev3", 00:09:40.371 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:40.371 "is_configured": true, 00:09:40.371 "data_offset": 2048, 00:09:40.371 "data_size": 63488 00:09:40.371 }, 00:09:40.371 { 00:09:40.371 "name": "BaseBdev4", 00:09:40.371 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:40.371 "is_configured": true, 00:09:40.371 "data_offset": 2048, 00:09:40.371 "data_size": 63488 00:09:40.371 } 00:09:40.371 ] 00:09:40.371 }' 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.371 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.982 
21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 [2024-11-27 21:42:03.837548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.982 "name": "Existed_Raid", 00:09:40.982 "aliases": [ 00:09:40.982 "08fbca26-1ce7-4ea7-b5d3-79843e1f4635" 00:09:40.982 ], 00:09:40.982 "product_name": "Raid Volume", 00:09:40.982 "block_size": 512, 00:09:40.982 "num_blocks": 253952, 00:09:40.982 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:40.982 "assigned_rate_limits": { 00:09:40.982 "rw_ios_per_sec": 0, 00:09:40.982 "rw_mbytes_per_sec": 0, 00:09:40.982 "r_mbytes_per_sec": 0, 00:09:40.982 "w_mbytes_per_sec": 0 00:09:40.982 }, 00:09:40.982 "claimed": false, 00:09:40.982 "zoned": false, 00:09:40.982 "supported_io_types": { 00:09:40.982 "read": true, 00:09:40.982 "write": true, 00:09:40.982 "unmap": true, 00:09:40.982 "flush": true, 00:09:40.982 "reset": true, 00:09:40.982 "nvme_admin": false, 00:09:40.982 "nvme_io": false, 00:09:40.982 "nvme_io_md": false, 00:09:40.982 "write_zeroes": true, 00:09:40.982 "zcopy": false, 00:09:40.982 "get_zone_info": false, 00:09:40.982 "zone_management": false, 00:09:40.982 "zone_append": false, 00:09:40.982 "compare": false, 00:09:40.982 "compare_and_write": false, 00:09:40.982 "abort": 
false, 00:09:40.982 "seek_hole": false, 00:09:40.982 "seek_data": false, 00:09:40.982 "copy": false, 00:09:40.982 "nvme_iov_md": false 00:09:40.982 }, 00:09:40.982 "memory_domains": [ 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.982 "dma_device_type": 2 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.982 "dma_device_type": 2 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.982 "dma_device_type": 2 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "system", 00:09:40.982 "dma_device_type": 1 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.982 "dma_device_type": 2 00:09:40.982 } 00:09:40.982 ], 00:09:40.982 "driver_specific": { 00:09:40.982 "raid": { 00:09:40.982 "uuid": "08fbca26-1ce7-4ea7-b5d3-79843e1f4635", 00:09:40.982 "strip_size_kb": 64, 00:09:40.982 "state": "online", 00:09:40.982 "raid_level": "concat", 00:09:40.982 "superblock": true, 00:09:40.982 "num_base_bdevs": 4, 00:09:40.982 "num_base_bdevs_discovered": 4, 00:09:40.982 "num_base_bdevs_operational": 4, 00:09:40.982 "base_bdevs_list": [ 00:09:40.982 { 00:09:40.982 "name": "NewBaseBdev", 00:09:40.982 "uuid": "f904cdf6-841c-41de-ac5e-5280dbf6d381", 00:09:40.982 "is_configured": true, 00:09:40.982 "data_offset": 2048, 00:09:40.982 "data_size": 63488 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "name": "BaseBdev2", 00:09:40.982 "uuid": "5e51cbc5-f51c-4953-ad70-2e86036de5e3", 00:09:40.982 "is_configured": true, 00:09:40.982 "data_offset": 2048, 00:09:40.982 "data_size": 63488 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 
"name": "BaseBdev3", 00:09:40.982 "uuid": "194f3a4c-8356-4f60-88de-159a259fde6e", 00:09:40.982 "is_configured": true, 00:09:40.982 "data_offset": 2048, 00:09:40.982 "data_size": 63488 00:09:40.982 }, 00:09:40.982 { 00:09:40.982 "name": "BaseBdev4", 00:09:40.982 "uuid": "884be1e5-0c69-4243-83d4-7f3fbcc6a41c", 00:09:40.982 "is_configured": true, 00:09:40.982 "data_offset": 2048, 00:09:40.982 "data_size": 63488 00:09:40.982 } 00:09:40.982 ] 00:09:40.982 } 00:09:40.982 } 00:09:40.982 }' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.982 BaseBdev2 00:09:40.982 BaseBdev3 00:09:40.982 BaseBdev4' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.982 21:42:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.982 21:42:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.982 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.243 [2024-11-27 21:42:04.164664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:41.243 [2024-11-27 21:42:04.164696] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.243 [2024-11-27 21:42:04.164785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.243 [2024-11-27 21:42:04.164865] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.243 [2024-11-27 21:42:04.164875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, 
state offline 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82536 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82536 ']' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82536 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82536 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82536' 00:09:41.243 killing process with pid 82536 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82536 00:09:41.243 [2024-11-27 21:42:04.202064] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.243 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 82536 00:09:41.243 [2024-11-27 21:42:04.242981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.504 21:42:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.504 00:09:41.504 real 0m9.332s 00:09:41.504 user 0m15.959s 00:09:41.504 sys 0m1.889s 00:09:41.504 21:42:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.504 21:42:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 ************************************ 00:09:41.504 END TEST raid_state_function_test_sb 00:09:41.504 ************************************ 00:09:41.504 21:42:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:09:41.504 21:42:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.504 21:42:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.504 21:42:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 ************************************ 00:09:41.504 START TEST raid_superblock_test 00:09:41.504 ************************************ 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83184 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83184 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83184 ']' 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.504 21:42:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.504 [2024-11-27 21:42:04.614843] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:41.504 [2024-11-27 21:42:04.615056] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83184 ] 00:09:41.765 [2024-11-27 21:42:04.769169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.765 [2024-11-27 21:42:04.794057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.765 [2024-11-27 21:42:04.835528] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.765 [2024-11-27 21:42:04.835640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:42.336 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:42.337 
21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.337 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 malloc1 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 [2024-11-27 21:42:05.466730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:42.597 [2024-11-27 21:42:05.466861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.597 [2024-11-27 21:42:05.466908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:09:42.597 [2024-11-27 21:42:05.466965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.597 [2024-11-27 21:42:05.469146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.597 [2024-11-27 21:42:05.469228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:42.597 pt1 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 malloc2 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.597 [2024-11-27 21:42:05.499237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.597 [2024-11-27 21:42:05.499294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.597 [2024-11-27 21:42:05.499312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:42.597 [2024-11-27 21:42:05.499323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.597 [2024-11-27 21:42:05.501424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.597 [2024-11-27 21:42:05.501461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.597 
pt2 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.597 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 malloc3 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 [2024-11-27 21:42:05.527677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:42.598 [2024-11-27 21:42:05.527764] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.598 [2024-11-27 21:42:05.527831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:42.598 [2024-11-27 21:42:05.527871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.598 [2024-11-27 21:42:05.529972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.598 [2024-11-27 21:42:05.530051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:42.598 pt3 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 malloc4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 [2024-11-27 21:42:05.569333] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:09:42.598 [2024-11-27 21:42:05.569414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.598 [2024-11-27 21:42:05.569462] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:42.598 [2024-11-27 21:42:05.569493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.598 [2024-11-27 21:42:05.571523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.598 [2024-11-27 21:42:05.571606] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:42.598 pt4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 [2024-11-27 21:42:05.581343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:42.598 [2024-11-27 
21:42:05.583180] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.598 [2024-11-27 21:42:05.583295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:42.598 [2024-11-27 21:42:05.583383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:42.598 [2024-11-27 21:42:05.583588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:09:42.598 [2024-11-27 21:42:05.583636] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:42.598 [2024-11-27 21:42:05.583931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:42.598 [2024-11-27 21:42:05.584068] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:09:42.598 [2024-11-27 21:42:05.584079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:09:42.598 [2024-11-27 21:42:05.584213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.598 "name": "raid_bdev1", 00:09:42.598 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:42.598 "strip_size_kb": 64, 00:09:42.598 "state": "online", 00:09:42.598 "raid_level": "concat", 00:09:42.598 "superblock": true, 00:09:42.598 "num_base_bdevs": 4, 00:09:42.598 "num_base_bdevs_discovered": 4, 00:09:42.598 "num_base_bdevs_operational": 4, 00:09:42.598 "base_bdevs_list": [ 00:09:42.598 { 00:09:42.598 "name": "pt1", 00:09:42.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.598 "is_configured": true, 00:09:42.598 "data_offset": 2048, 00:09:42.598 "data_size": 63488 00:09:42.598 }, 00:09:42.598 { 00:09:42.598 "name": "pt2", 00:09:42.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.598 "is_configured": true, 00:09:42.598 "data_offset": 2048, 00:09:42.598 "data_size": 63488 00:09:42.598 }, 00:09:42.598 { 00:09:42.598 "name": "pt3", 00:09:42.598 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.598 "is_configured": true, 00:09:42.598 "data_offset": 2048, 00:09:42.598 
"data_size": 63488 00:09:42.598 }, 00:09:42.598 { 00:09:42.598 "name": "pt4", 00:09:42.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:42.598 "is_configured": true, 00:09:42.598 "data_offset": 2048, 00:09:42.598 "data_size": 63488 00:09:42.598 } 00:09:42.598 ] 00:09:42.598 }' 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.598 21:42:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.169 [2024-11-27 21:42:06.072862] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.169 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:43.169 "name": "raid_bdev1", 00:09:43.169 "aliases": [ 00:09:43.169 "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9" 
00:09:43.169 ], 00:09:43.169 "product_name": "Raid Volume", 00:09:43.169 "block_size": 512, 00:09:43.169 "num_blocks": 253952, 00:09:43.169 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:43.169 "assigned_rate_limits": { 00:09:43.169 "rw_ios_per_sec": 0, 00:09:43.169 "rw_mbytes_per_sec": 0, 00:09:43.169 "r_mbytes_per_sec": 0, 00:09:43.169 "w_mbytes_per_sec": 0 00:09:43.169 }, 00:09:43.169 "claimed": false, 00:09:43.169 "zoned": false, 00:09:43.169 "supported_io_types": { 00:09:43.169 "read": true, 00:09:43.169 "write": true, 00:09:43.169 "unmap": true, 00:09:43.169 "flush": true, 00:09:43.169 "reset": true, 00:09:43.169 "nvme_admin": false, 00:09:43.169 "nvme_io": false, 00:09:43.169 "nvme_io_md": false, 00:09:43.169 "write_zeroes": true, 00:09:43.169 "zcopy": false, 00:09:43.169 "get_zone_info": false, 00:09:43.169 "zone_management": false, 00:09:43.169 "zone_append": false, 00:09:43.169 "compare": false, 00:09:43.169 "compare_and_write": false, 00:09:43.169 "abort": false, 00:09:43.169 "seek_hole": false, 00:09:43.169 "seek_data": false, 00:09:43.169 "copy": false, 00:09:43.169 "nvme_iov_md": false 00:09:43.169 }, 00:09:43.169 "memory_domains": [ 00:09:43.169 { 00:09:43.169 "dma_device_id": "system", 00:09:43.169 "dma_device_type": 1 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.169 "dma_device_type": 2 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "system", 00:09:43.169 "dma_device_type": 1 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.169 "dma_device_type": 2 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "system", 00:09:43.169 "dma_device_type": 1 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.169 "dma_device_type": 2 00:09:43.169 }, 00:09:43.169 { 00:09:43.169 "dma_device_id": "system", 00:09:43.169 "dma_device_type": 1 00:09:43.169 }, 00:09:43.169 { 00:09:43.170 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:43.170 "dma_device_type": 2 00:09:43.170 } 00:09:43.170 ], 00:09:43.170 "driver_specific": { 00:09:43.170 "raid": { 00:09:43.170 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:43.170 "strip_size_kb": 64, 00:09:43.170 "state": "online", 00:09:43.170 "raid_level": "concat", 00:09:43.170 "superblock": true, 00:09:43.170 "num_base_bdevs": 4, 00:09:43.170 "num_base_bdevs_discovered": 4, 00:09:43.170 "num_base_bdevs_operational": 4, 00:09:43.170 "base_bdevs_list": [ 00:09:43.170 { 00:09:43.170 "name": "pt1", 00:09:43.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.170 "is_configured": true, 00:09:43.170 "data_offset": 2048, 00:09:43.170 "data_size": 63488 00:09:43.170 }, 00:09:43.170 { 00:09:43.170 "name": "pt2", 00:09:43.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.170 "is_configured": true, 00:09:43.170 "data_offset": 2048, 00:09:43.170 "data_size": 63488 00:09:43.170 }, 00:09:43.170 { 00:09:43.170 "name": "pt3", 00:09:43.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.170 "is_configured": true, 00:09:43.170 "data_offset": 2048, 00:09:43.170 "data_size": 63488 00:09:43.170 }, 00:09:43.170 { 00:09:43.170 "name": "pt4", 00:09:43.170 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:43.170 "is_configured": true, 00:09:43.170 "data_offset": 2048, 00:09:43.170 "data_size": 63488 00:09:43.170 } 00:09:43.170 ] 00:09:43.170 } 00:09:43.170 } 00:09:43.170 }' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:43.170 pt2 00:09:43.170 pt3 00:09:43.170 pt4' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.170 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.170 21:42:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.430 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:43.430 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 [2024-11-27 21:42:06.396251] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9 ']' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 [2024-11-27 21:42:06.439914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.431 [2024-11-27 21:42:06.439945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.431 [2024-11-27 21:42:06.440019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.431 [2024-11-27 21:42:06.440099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.431 [2024-11-27 21:42:06.440129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.431 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.692 21:42:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 [2024-11-27 21:42:06.607649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:43.692 [2024-11-27 21:42:06.609565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:43.692 [2024-11-27 21:42:06.609615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:43.692 [2024-11-27 21:42:06.609644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:09:43.692 [2024-11-27 21:42:06.609692] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:43.692 [2024-11-27 21:42:06.609751] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:43.692 [2024-11-27 21:42:06.609774] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:43.692 [2024-11-27 21:42:06.609789] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:09:43.692 [2024-11-27 21:42:06.609817] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.692 [2024-11-27 21:42:06.609832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001580 name raid_bdev1, state configuring 00:09:43.692 request: 00:09:43.692 { 00:09:43.692 "name": "raid_bdev1", 00:09:43.692 "raid_level": "concat", 00:09:43.692 "base_bdevs": [ 00:09:43.692 "malloc1", 00:09:43.692 "malloc2", 00:09:43.692 "malloc3", 00:09:43.692 "malloc4" 00:09:43.692 ], 00:09:43.692 "strip_size_kb": 64, 00:09:43.692 "superblock": false, 00:09:43.692 "method": "bdev_raid_create", 00:09:43.692 "req_id": 1 00:09:43.692 } 00:09:43.692 Got JSON-RPC error response 00:09:43.692 response: 00:09:43.692 { 00:09:43.692 "code": -17, 00:09:43.692 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:43.692 } 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.692 [2024-11-27 21:42:06.675493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:43.692 [2024-11-27 21:42:06.675584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.692 [2024-11-27 21:42:06.675628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:43.692 [2024-11-27 21:42:06.675655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.692 [2024-11-27 21:42:06.677857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.692 [2024-11-27 21:42:06.677924] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:43.692 [2024-11-27 21:42:06.678015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:43.692 [2024-11-27 21:42:06.678072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:43.692 pt1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.692 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.693 "name": "raid_bdev1", 00:09:43.693 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:43.693 "strip_size_kb": 64, 00:09:43.693 "state": "configuring", 00:09:43.693 "raid_level": "concat", 00:09:43.693 "superblock": true, 00:09:43.693 "num_base_bdevs": 4, 00:09:43.693 "num_base_bdevs_discovered": 1, 00:09:43.693 "num_base_bdevs_operational": 4, 00:09:43.693 "base_bdevs_list": [ 00:09:43.693 { 00:09:43.693 "name": "pt1", 00:09:43.693 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:43.693 "is_configured": true, 00:09:43.693 "data_offset": 2048, 00:09:43.693 "data_size": 63488 00:09:43.693 }, 00:09:43.693 { 00:09:43.693 "name": null, 00:09:43.693 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.693 "is_configured": false, 00:09:43.693 "data_offset": 2048, 00:09:43.693 "data_size": 63488 00:09:43.693 }, 00:09:43.693 { 00:09:43.693 "name": null, 00:09:43.693 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.693 "is_configured": false, 00:09:43.693 "data_offset": 2048, 00:09:43.693 "data_size": 63488 00:09:43.693 }, 00:09:43.693 { 00:09:43.693 "name": null, 00:09:43.693 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:43.693 "is_configured": false, 00:09:43.693 "data_offset": 2048, 00:09:43.693 "data_size": 63488 00:09:43.693 } 00:09:43.693 ] 00:09:43.693 }' 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.693 21:42:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 [2024-11-27 21:42:07.150683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.264 [2024-11-27 21:42:07.150778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.264 [2024-11-27 21:42:07.150824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:44.264 [2024-11-27 21:42:07.150854] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.264 [2024-11-27 21:42:07.151275] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.264 [2024-11-27 21:42:07.151331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.264 [2024-11-27 21:42:07.151444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.264 [2024-11-27 21:42:07.151494] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.264 pt2 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 [2024-11-27 21:42:07.162670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.264 21:42:07 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.264 "name": "raid_bdev1", 00:09:44.264 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:44.264 "strip_size_kb": 64, 00:09:44.264 "state": "configuring", 00:09:44.264 "raid_level": "concat", 00:09:44.264 "superblock": true, 00:09:44.264 "num_base_bdevs": 4, 00:09:44.264 "num_base_bdevs_discovered": 1, 00:09:44.264 "num_base_bdevs_operational": 4, 00:09:44.264 "base_bdevs_list": [ 00:09:44.264 { 00:09:44.264 "name": "pt1", 00:09:44.264 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.264 "is_configured": true, 00:09:44.264 "data_offset": 2048, 00:09:44.264 "data_size": 63488 00:09:44.264 }, 00:09:44.264 { 00:09:44.264 "name": null, 00:09:44.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.264 "is_configured": false, 00:09:44.264 "data_offset": 0, 00:09:44.264 "data_size": 63488 00:09:44.264 }, 00:09:44.264 { 00:09:44.264 "name": null, 00:09:44.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.264 "is_configured": false, 00:09:44.264 "data_offset": 2048, 00:09:44.264 "data_size": 63488 00:09:44.264 }, 00:09:44.264 { 00:09:44.264 "name": null, 00:09:44.264 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:44.264 "is_configured": false, 00:09:44.264 "data_offset": 2048, 00:09:44.264 "data_size": 63488 00:09:44.264 } 00:09:44.264 ] 00:09:44.264 }' 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.264 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.526 [2024-11-27 21:42:07.561974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:44.526 [2024-11-27 21:42:07.562075] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.526 [2024-11-27 21:42:07.562108] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:44.526 [2024-11-27 21:42:07.562137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.526 [2024-11-27 21:42:07.562601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.526 [2024-11-27 21:42:07.562662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:44.526 [2024-11-27 21:42:07.562775] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:44.526 [2024-11-27 21:42:07.562860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.526 pt2 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.526 [2024-11-27 21:42:07.573925] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.526 [2024-11-27 21:42:07.573967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.526 [2024-11-27 21:42:07.573982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:44.526 [2024-11-27 21:42:07.573991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.526 [2024-11-27 21:42:07.574326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.526 [2024-11-27 21:42:07.574344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.526 [2024-11-27 21:42:07.574395] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.526 [2024-11-27 21:42:07.574415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.526 pt3 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.526 [2024-11-27 21:42:07.585905] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:09:44.526 [2024-11-27 21:42:07.585948] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.526 [2024-11-27 21:42:07.585976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:44.526 [2024-11-27 21:42:07.585984] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.526 [2024-11-27 21:42:07.586261] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.526 [2024-11-27 21:42:07.586278] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:09:44.526 [2024-11-27 21:42:07.586325] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:09:44.526 [2024-11-27 21:42:07.586343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:09:44.526 [2024-11-27 21:42:07.586432] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:44.526 [2024-11-27 21:42:07.586442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:44.526 [2024-11-27 21:42:07.586647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:09:44.526 [2024-11-27 21:42:07.586755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:44.526 [2024-11-27 21:42:07.586763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:09:44.526 [2024-11-27 21:42:07.586865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.526 pt4 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.526 
21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.526 "name": "raid_bdev1", 00:09:44.526 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:44.526 "strip_size_kb": 64, 00:09:44.526 "state": "online", 00:09:44.526 "raid_level": "concat", 00:09:44.526 "superblock": true, 00:09:44.526 
"num_base_bdevs": 4, 00:09:44.526 "num_base_bdevs_discovered": 4, 00:09:44.526 "num_base_bdevs_operational": 4, 00:09:44.526 "base_bdevs_list": [ 00:09:44.526 { 00:09:44.526 "name": "pt1", 00:09:44.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.526 "is_configured": true, 00:09:44.526 "data_offset": 2048, 00:09:44.526 "data_size": 63488 00:09:44.526 }, 00:09:44.526 { 00:09:44.526 "name": "pt2", 00:09:44.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.526 "is_configured": true, 00:09:44.526 "data_offset": 2048, 00:09:44.526 "data_size": 63488 00:09:44.526 }, 00:09:44.526 { 00:09:44.526 "name": "pt3", 00:09:44.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.526 "is_configured": true, 00:09:44.526 "data_offset": 2048, 00:09:44.526 "data_size": 63488 00:09:44.526 }, 00:09:44.526 { 00:09:44.526 "name": "pt4", 00:09:44.526 "uuid": "00000000-0000-0000-0000-000000000004", 00:09:44.526 "is_configured": true, 00:09:44.526 "data_offset": 2048, 00:09:44.526 "data_size": 63488 00:09:44.526 } 00:09:44.526 ] 00:09:44.526 }' 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.526 21:42:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.096 [2024-11-27 21:42:08.037470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.096 "name": "raid_bdev1", 00:09:45.096 "aliases": [ 00:09:45.096 "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9" 00:09:45.096 ], 00:09:45.096 "product_name": "Raid Volume", 00:09:45.096 "block_size": 512, 00:09:45.096 "num_blocks": 253952, 00:09:45.096 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:45.096 "assigned_rate_limits": { 00:09:45.096 "rw_ios_per_sec": 0, 00:09:45.096 "rw_mbytes_per_sec": 0, 00:09:45.096 "r_mbytes_per_sec": 0, 00:09:45.096 "w_mbytes_per_sec": 0 00:09:45.096 }, 00:09:45.096 "claimed": false, 00:09:45.096 "zoned": false, 00:09:45.096 "supported_io_types": { 00:09:45.096 "read": true, 00:09:45.096 "write": true, 00:09:45.096 "unmap": true, 00:09:45.096 "flush": true, 00:09:45.096 "reset": true, 00:09:45.096 "nvme_admin": false, 00:09:45.096 "nvme_io": false, 00:09:45.096 "nvme_io_md": false, 00:09:45.096 "write_zeroes": true, 00:09:45.096 "zcopy": false, 00:09:45.096 "get_zone_info": false, 00:09:45.096 "zone_management": false, 00:09:45.096 "zone_append": false, 00:09:45.096 "compare": false, 00:09:45.096 "compare_and_write": false, 00:09:45.096 "abort": false, 00:09:45.096 "seek_hole": false, 00:09:45.096 "seek_data": false, 00:09:45.096 "copy": false, 00:09:45.096 "nvme_iov_md": false 00:09:45.096 }, 00:09:45.096 "memory_domains": [ 00:09:45.096 { 00:09:45.096 "dma_device_id": "system", 
00:09:45.096 "dma_device_type": 1 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.096 "dma_device_type": 2 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "system", 00:09:45.096 "dma_device_type": 1 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.096 "dma_device_type": 2 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "system", 00:09:45.096 "dma_device_type": 1 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.096 "dma_device_type": 2 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "system", 00:09:45.096 "dma_device_type": 1 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.096 "dma_device_type": 2 00:09:45.096 } 00:09:45.096 ], 00:09:45.096 "driver_specific": { 00:09:45.096 "raid": { 00:09:45.096 "uuid": "4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9", 00:09:45.096 "strip_size_kb": 64, 00:09:45.096 "state": "online", 00:09:45.096 "raid_level": "concat", 00:09:45.096 "superblock": true, 00:09:45.096 "num_base_bdevs": 4, 00:09:45.096 "num_base_bdevs_discovered": 4, 00:09:45.096 "num_base_bdevs_operational": 4, 00:09:45.096 "base_bdevs_list": [ 00:09:45.096 { 00:09:45.096 "name": "pt1", 00:09:45.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.096 "is_configured": true, 00:09:45.096 "data_offset": 2048, 00:09:45.096 "data_size": 63488 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "name": "pt2", 00:09:45.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.096 "is_configured": true, 00:09:45.096 "data_offset": 2048, 00:09:45.096 "data_size": 63488 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "name": "pt3", 00:09:45.096 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.096 "is_configured": true, 00:09:45.096 "data_offset": 2048, 00:09:45.096 "data_size": 63488 00:09:45.096 }, 00:09:45.096 { 00:09:45.096 "name": "pt4", 00:09:45.096 
"uuid": "00000000-0000-0000-0000-000000000004", 00:09:45.096 "is_configured": true, 00:09:45.096 "data_offset": 2048, 00:09:45.096 "data_size": 63488 00:09:45.096 } 00:09:45.096 ] 00:09:45.096 } 00:09:45.096 } 00:09:45.096 }' 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.096 pt2 00:09:45.096 pt3 00:09:45.096 pt4' 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.096 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.097 
21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.097 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.357 [2024-11-27 21:42:08.328941] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9 '!=' 4cfdb44d-fcb6-44d9-811e-5cb3b3f48af9 ']' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83184 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83184 ']' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83184 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:45.357 21:42:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83184 00:09:45.357 killing process with pid 83184 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83184' 00:09:45.357 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 83184 00:09:45.357 [2024-11-27 21:42:08.398395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.358 [2024-11-27 21:42:08.398475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.358 [2024-11-27 21:42:08.398544] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.358 [2024-11-27 21:42:08.398556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:09:45.358 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 83184 00:09:45.358 [2024-11-27 21:42:08.440631] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.618 21:42:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:45.618 00:09:45.618 real 0m4.119s 00:09:45.618 user 0m6.529s 00:09:45.619 sys 0m0.906s 00:09:45.619 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.619 21:42:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.619 ************************************ 00:09:45.619 END TEST raid_superblock_test 00:09:45.619 ************************************ 00:09:45.619 
21:42:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:09:45.619 21:42:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:45.619 21:42:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.619 21:42:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.619 ************************************ 00:09:45.619 START TEST raid_read_error_test 00:09:45.619 ************************************ 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:45.619 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.pZhvS3oBEv 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83432 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:45.879 21:42:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83432 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 83432 ']' 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.879 21:42:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.879 [2024-11-27 21:42:08.822817] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:45.879 [2024-11-27 21:42:08.822996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83432 ] 00:09:45.879 [2024-11-27 21:42:08.979810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.139 [2024-11-27 21:42:09.005047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.139 [2024-11-27 21:42:09.046515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.139 [2024-11-27 21:42:09.046626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.708 BaseBdev1_malloc 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:46.708 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 true 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 [2024-11-27 21:42:09.681618] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:46.709 [2024-11-27 21:42:09.681714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.709 [2024-11-27 21:42:09.681765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:46.709 [2024-11-27 21:42:09.681817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.709 [2024-11-27 21:42:09.683932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.709 [2024-11-27 21:42:09.683997] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:46.709 BaseBdev1 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 BaseBdev2_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 true 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 [2024-11-27 21:42:09.722158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:46.709 [2024-11-27 21:42:09.722239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.709 [2024-11-27 21:42:09.722262] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:46.709 [2024-11-27 21:42:09.722279] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.709 [2024-11-27 21:42:09.724384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.709 [2024-11-27 21:42:09.724421] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:46.709 BaseBdev2 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 BaseBdev3_malloc 00:09:46.709 21:42:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 true 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 [2024-11-27 21:42:09.762713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:46.709 [2024-11-27 21:42:09.762802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.709 [2024-11-27 21:42:09.762840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:46.709 [2024-11-27 21:42:09.762849] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.709 [2024-11-27 21:42:09.764882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.709 [2024-11-27 21:42:09.764916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:46.709 BaseBdev3 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 BaseBdev4_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 true 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 [2024-11-27 21:42:09.813627] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:46.709 [2024-11-27 21:42:09.813671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.709 [2024-11-27 21:42:09.813691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:46.709 [2024-11-27 21:42:09.813699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.709 [2024-11-27 21:42:09.815711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.709 [2024-11-27 21:42:09.815746] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:46.709 BaseBdev4 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.709 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.709 [2024-11-27 21:42:09.825653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:46.709 [2024-11-27 21:42:09.827508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:46.709 [2024-11-27 21:42:09.827624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.709 [2024-11-27 21:42:09.827713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:46.709 [2024-11-27 21:42:09.827973] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:46.709 [2024-11-27 21:42:09.828026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:46.709 [2024-11-27 21:42:09.828369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:09:46.709 [2024-11-27 21:42:09.828562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:46.709 [2024-11-27 21:42:09.828610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:46.709 [2024-11-27 21:42:09.828823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:46.970 21:42:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.970 "name": "raid_bdev1", 00:09:46.970 "uuid": "35dcd337-8464-430f-a600-4c8e4735ac5d", 00:09:46.970 "strip_size_kb": 64, 00:09:46.970 "state": "online", 00:09:46.970 "raid_level": "concat", 00:09:46.970 "superblock": true, 00:09:46.970 "num_base_bdevs": 4, 00:09:46.970 "num_base_bdevs_discovered": 4, 00:09:46.970 "num_base_bdevs_operational": 4, 00:09:46.970 "base_bdevs_list": [ 
00:09:46.970 { 00:09:46.970 "name": "BaseBdev1", 00:09:46.970 "uuid": "011abff7-ea79-5fc1-89fa-eda7cc0ea2c0", 00:09:46.970 "is_configured": true, 00:09:46.970 "data_offset": 2048, 00:09:46.970 "data_size": 63488 00:09:46.970 }, 00:09:46.970 { 00:09:46.970 "name": "BaseBdev2", 00:09:46.970 "uuid": "ecca195e-cb29-5180-89ad-3dda2eddb8a5", 00:09:46.970 "is_configured": true, 00:09:46.970 "data_offset": 2048, 00:09:46.970 "data_size": 63488 00:09:46.970 }, 00:09:46.970 { 00:09:46.970 "name": "BaseBdev3", 00:09:46.970 "uuid": "47da3d08-aac5-5d80-86db-af9fa9b4ffa4", 00:09:46.970 "is_configured": true, 00:09:46.970 "data_offset": 2048, 00:09:46.970 "data_size": 63488 00:09:46.970 }, 00:09:46.970 { 00:09:46.970 "name": "BaseBdev4", 00:09:46.970 "uuid": "4f8eeaee-e594-51c4-9580-f6dd6e60bf02", 00:09:46.970 "is_configured": true, 00:09:46.970 "data_offset": 2048, 00:09:46.970 "data_size": 63488 00:09:46.970 } 00:09:46.970 ] 00:09:46.970 }' 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.970 21:42:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.229 21:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:47.230 21:42:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.490 [2024-11-27 21:42:10.361146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.430 21:42:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.430 21:42:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.430 "name": "raid_bdev1", 00:09:48.430 "uuid": "35dcd337-8464-430f-a600-4c8e4735ac5d", 00:09:48.430 "strip_size_kb": 64, 00:09:48.430 "state": "online", 00:09:48.430 "raid_level": "concat", 00:09:48.430 "superblock": true, 00:09:48.430 "num_base_bdevs": 4, 00:09:48.430 "num_base_bdevs_discovered": 4, 00:09:48.430 "num_base_bdevs_operational": 4, 00:09:48.430 "base_bdevs_list": [ 00:09:48.430 { 00:09:48.430 "name": "BaseBdev1", 00:09:48.430 "uuid": "011abff7-ea79-5fc1-89fa-eda7cc0ea2c0", 00:09:48.430 "is_configured": true, 00:09:48.430 "data_offset": 2048, 00:09:48.430 "data_size": 63488 00:09:48.430 }, 00:09:48.430 { 00:09:48.430 "name": "BaseBdev2", 00:09:48.430 "uuid": "ecca195e-cb29-5180-89ad-3dda2eddb8a5", 00:09:48.430 "is_configured": true, 00:09:48.430 "data_offset": 2048, 00:09:48.430 "data_size": 63488 00:09:48.430 }, 00:09:48.430 { 00:09:48.430 "name": "BaseBdev3", 00:09:48.430 "uuid": "47da3d08-aac5-5d80-86db-af9fa9b4ffa4", 00:09:48.430 "is_configured": true, 00:09:48.430 "data_offset": 2048, 00:09:48.430 "data_size": 63488 00:09:48.430 }, 00:09:48.430 { 00:09:48.430 "name": "BaseBdev4", 00:09:48.430 "uuid": "4f8eeaee-e594-51c4-9580-f6dd6e60bf02", 00:09:48.430 "is_configured": true, 00:09:48.430 "data_offset": 2048, 00:09:48.430 "data_size": 63488 00:09:48.430 } 00:09:48.430 ] 00:09:48.430 }' 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.430 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.691 [2024-11-27 21:42:11.716747] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:48.691 [2024-11-27 21:42:11.716835] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:48.691 [2024-11-27 21:42:11.719383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:48.691 [2024-11-27 21:42:11.719505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.691 [2024-11-27 21:42:11.719574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:48.691 [2024-11-27 21:42:11.719635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:48.691 { 00:09:48.691 "results": [ 00:09:48.691 { 00:09:48.691 "job": "raid_bdev1", 00:09:48.691 "core_mask": "0x1", 00:09:48.691 "workload": "randrw", 00:09:48.691 "percentage": 50, 00:09:48.691 "status": "finished", 00:09:48.691 "queue_depth": 1, 00:09:48.691 "io_size": 131072, 00:09:48.691 "runtime": 1.356584, 00:09:48.691 "iops": 16271.753168252022, 00:09:48.691 "mibps": 2033.9691460315028, 00:09:48.691 "io_failed": 1, 00:09:48.691 "io_timeout": 0, 00:09:48.691 "avg_latency_us": 84.96175455844754, 00:09:48.691 "min_latency_us": 25.6, 00:09:48.691 "max_latency_us": 1359.3711790393013 00:09:48.691 } 00:09:48.691 ], 00:09:48.691 "core_count": 1 00:09:48.691 } 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83432 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 83432 ']' 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 83432 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83432 00:09:48.691 killing process with pid 83432 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83432' 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 83432 00:09:48.691 [2024-11-27 21:42:11.764426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:48.691 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 83432 00:09:48.691 [2024-11-27 21:42:11.798218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.pZhvS3oBEv 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:48.952 ************************************ 00:09:48.952 END TEST raid_read_error_test 00:09:48.952 ************************************ 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:48.952 00:09:48.952 real 0m3.276s 
00:09:48.952 user 0m4.136s 00:09:48.952 sys 0m0.522s 00:09:48.952 21:42:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.952 21:42:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.952 21:42:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:09:48.952 21:42:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.952 21:42:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.952 21:42:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.952 ************************************ 00:09:48.952 START TEST raid_write_error_test 00:09:48.952 ************************************ 00:09:48.952 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:09:48.952 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:48.952 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:09:48.952 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wwUZlnVPli 00:09:49.212 21:42:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=83561 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 83561 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 83561 ']' 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.212 21:42:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.212 [2024-11-27 21:42:12.173865] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:49.212 [2024-11-27 21:42:12.174054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83561 ] 00:09:49.212 [2024-11-27 21:42:12.308414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.472 [2024-11-27 21:42:12.333573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.472 [2024-11-27 21:42:12.375162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.472 [2024-11-27 21:42:12.375195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.040 BaseBdev1_malloc 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.040 true 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.040 [2024-11-27 21:42:13.038172] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:50.040 [2024-11-27 21:42:13.038270] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.040 [2024-11-27 21:42:13.038307] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:09:50.040 [2024-11-27 21:42:13.038334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.040 [2024-11-27 21:42:13.040442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.040 [2024-11-27 21:42:13.040523] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:50.040 BaseBdev1 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.040 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 BaseBdev2_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:50.041 21:42:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 true 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 [2024-11-27 21:42:13.078591] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:50.041 [2024-11-27 21:42:13.078686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.041 [2024-11-27 21:42:13.078720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:09:50.041 [2024-11-27 21:42:13.078756] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.041 [2024-11-27 21:42:13.080818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.041 [2024-11-27 21:42:13.080882] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:50.041 BaseBdev2 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:50.041 BaseBdev3_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 true 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 [2024-11-27 21:42:13.118895] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:50.041 [2024-11-27 21:42:13.118974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.041 [2024-11-27 21:42:13.119009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:09:50.041 [2024-11-27 21:42:13.119055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.041 [2024-11-27 21:42:13.121070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.041 [2024-11-27 21:42:13.121135] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:50.041 BaseBdev3 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.041 BaseBdev4_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.041 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.301 true 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.301 [2024-11-27 21:42:13.175904] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:09:50.301 [2024-11-27 21:42:13.176006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:50.301 [2024-11-27 21:42:13.176051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:50.301 [2024-11-27 21:42:13.176094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:50.301 [2024-11-27 21:42:13.178476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:50.301 [2024-11-27 21:42:13.178554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:09:50.301 BaseBdev4 
00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.301 [2024-11-27 21:42:13.187919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:50.301 [2024-11-27 21:42:13.189722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.301 [2024-11-27 21:42:13.189858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.301 [2024-11-27 21:42:13.189918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:50.301 [2024-11-27 21:42:13.190111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:09:50.301 [2024-11-27 21:42:13.190123] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:09:50.301 [2024-11-27 21:42:13.190358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:09:50.301 [2024-11-27 21:42:13.190505] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:09:50.301 [2024-11-27 21:42:13.190529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:09:50.301 [2024-11-27 21:42:13.190671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.301 "name": "raid_bdev1", 00:09:50.301 "uuid": "121e331b-1ef0-4cd0-895b-b33531b03b17", 00:09:50.301 "strip_size_kb": 64, 00:09:50.301 "state": "online", 00:09:50.301 "raid_level": "concat", 00:09:50.301 "superblock": true, 00:09:50.301 "num_base_bdevs": 4, 00:09:50.301 "num_base_bdevs_discovered": 4, 00:09:50.301 
"num_base_bdevs_operational": 4, 00:09:50.301 "base_bdevs_list": [ 00:09:50.301 { 00:09:50.301 "name": "BaseBdev1", 00:09:50.301 "uuid": "db0165d9-6899-534f-a0bb-10a03fb995ef", 00:09:50.301 "is_configured": true, 00:09:50.301 "data_offset": 2048, 00:09:50.301 "data_size": 63488 00:09:50.301 }, 00:09:50.301 { 00:09:50.301 "name": "BaseBdev2", 00:09:50.301 "uuid": "fd67fc17-d6f4-5b5e-a989-7dea2c07f97f", 00:09:50.301 "is_configured": true, 00:09:50.301 "data_offset": 2048, 00:09:50.301 "data_size": 63488 00:09:50.301 }, 00:09:50.301 { 00:09:50.301 "name": "BaseBdev3", 00:09:50.301 "uuid": "38344247-3376-5aff-ace0-0214ab8d0a9c", 00:09:50.301 "is_configured": true, 00:09:50.301 "data_offset": 2048, 00:09:50.301 "data_size": 63488 00:09:50.301 }, 00:09:50.301 { 00:09:50.301 "name": "BaseBdev4", 00:09:50.301 "uuid": "a23e17cd-d442-5fb0-a973-6fc3109bb8bd", 00:09:50.301 "is_configured": true, 00:09:50.301 "data_offset": 2048, 00:09:50.301 "data_size": 63488 00:09:50.301 } 00:09:50.301 ] 00:09:50.301 }' 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.301 21:42:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.560 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.560 21:42:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.819 [2024-11-27 21:42:13.747293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:09:51.758 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.758 21:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.758 21:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.759 21:42:14 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.759 "name": "raid_bdev1", 00:09:51.759 "uuid": "121e331b-1ef0-4cd0-895b-b33531b03b17", 00:09:51.759 "strip_size_kb": 64, 00:09:51.759 "state": "online", 00:09:51.759 "raid_level": "concat", 00:09:51.759 "superblock": true, 00:09:51.759 "num_base_bdevs": 4, 00:09:51.759 "num_base_bdevs_discovered": 4, 00:09:51.759 "num_base_bdevs_operational": 4, 00:09:51.759 "base_bdevs_list": [ 00:09:51.759 { 00:09:51.759 "name": "BaseBdev1", 00:09:51.759 "uuid": "db0165d9-6899-534f-a0bb-10a03fb995ef", 00:09:51.759 "is_configured": true, 00:09:51.759 "data_offset": 2048, 00:09:51.759 "data_size": 63488 00:09:51.759 }, 00:09:51.759 { 00:09:51.759 "name": "BaseBdev2", 00:09:51.759 "uuid": "fd67fc17-d6f4-5b5e-a989-7dea2c07f97f", 00:09:51.759 "is_configured": true, 00:09:51.759 "data_offset": 2048, 00:09:51.759 "data_size": 63488 00:09:51.759 }, 00:09:51.759 { 00:09:51.759 "name": "BaseBdev3", 00:09:51.759 "uuid": "38344247-3376-5aff-ace0-0214ab8d0a9c", 00:09:51.759 "is_configured": true, 00:09:51.759 "data_offset": 2048, 00:09:51.759 "data_size": 63488 00:09:51.759 }, 00:09:51.759 { 00:09:51.759 "name": "BaseBdev4", 00:09:51.759 "uuid": "a23e17cd-d442-5fb0-a973-6fc3109bb8bd", 00:09:51.759 "is_configured": true, 00:09:51.759 "data_offset": 2048, 00:09:51.759 "data_size": 63488 00:09:51.759 } 00:09:51.759 ] 00:09:51.759 }' 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.759 21:42:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.020 [2024-11-27 21:42:15.103084] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.020 [2024-11-27 21:42:15.103153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.020 [2024-11-27 21:42:15.105834] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.020 [2024-11-27 21:42:15.105938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.020 [2024-11-27 21:42:15.106018] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.020 [2024-11-27 21:42:15.106062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:09:52.020 { 00:09:52.020 "results": [ 00:09:52.020 { 00:09:52.020 "job": "raid_bdev1", 00:09:52.020 "core_mask": "0x1", 00:09:52.020 "workload": "randrw", 00:09:52.020 "percentage": 50, 00:09:52.020 "status": "finished", 00:09:52.020 "queue_depth": 1, 00:09:52.020 "io_size": 131072, 00:09:52.020 "runtime": 1.35663, 00:09:52.020 "iops": 16425.259650752232, 00:09:52.020 "mibps": 2053.157456344029, 00:09:52.020 "io_failed": 1, 00:09:52.020 "io_timeout": 0, 00:09:52.020 "avg_latency_us": 84.08580241252461, 00:09:52.020 "min_latency_us": 25.3764192139738, 00:09:52.020 "max_latency_us": 1366.5257641921398 00:09:52.020 } 00:09:52.020 ], 00:09:52.020 "core_count": 1 00:09:52.020 } 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 83561 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 83561 ']' 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 83561 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.020 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83561 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83561' 00:09:52.280 killing process with pid 83561 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 83561 00:09:52.280 [2024-11-27 21:42:15.152838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 83561 00:09:52.280 [2024-11-27 21:42:15.187221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wwUZlnVPli 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:52.280 ************************************ 00:09:52.280 END TEST raid_write_error_test 00:09:52.280 ************************************ 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:52.280 21:42:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:52.280 00:09:52.280 real 0m3.325s 00:09:52.280 user 0m4.210s 00:09:52.280 sys 0m0.545s 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.280 21:42:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.539 21:42:15 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:52.539 21:42:15 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:09:52.539 21:42:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:52.539 21:42:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.539 21:42:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:52.539 ************************************ 00:09:52.539 START TEST raid_state_function_test 00:09:52.539 ************************************ 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.539 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:52.540 Process raid pid: 
83694 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83694 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83694' 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83694 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83694 ']' 00:09:52.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:52.540 21:42:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.540 [2024-11-27 21:42:15.563777] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:09:52.540 [2024-11-27 21:42:15.564006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.799 [2024-11-27 21:42:15.719286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.799 [2024-11-27 21:42:15.743475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.799 [2024-11-27 21:42:15.785427] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.799 [2024-11-27 21:42:15.785463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.367 [2024-11-27 21:42:16.391884] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.367 [2024-11-27 21:42:16.391961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.367 [2024-11-27 21:42:16.391970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.367 [2024-11-27 21:42:16.391979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.367 [2024-11-27 21:42:16.391985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:53.367 [2024-11-27 21:42:16.391997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.367 [2024-11-27 21:42:16.392003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.367 [2024-11-27 21:42:16.392010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.367 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.368 "name": "Existed_Raid", 00:09:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.368 "strip_size_kb": 0, 00:09:53.368 "state": "configuring", 00:09:53.368 "raid_level": "raid1", 00:09:53.368 "superblock": false, 00:09:53.368 "num_base_bdevs": 4, 00:09:53.368 "num_base_bdevs_discovered": 0, 00:09:53.368 "num_base_bdevs_operational": 4, 00:09:53.368 "base_bdevs_list": [ 00:09:53.368 { 00:09:53.368 "name": "BaseBdev1", 00:09:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.368 "is_configured": false, 00:09:53.368 "data_offset": 0, 00:09:53.368 "data_size": 0 00:09:53.368 }, 00:09:53.368 { 00:09:53.368 "name": "BaseBdev2", 00:09:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.368 "is_configured": false, 00:09:53.368 "data_offset": 0, 00:09:53.368 "data_size": 0 00:09:53.368 }, 00:09:53.368 { 00:09:53.368 "name": "BaseBdev3", 00:09:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.368 "is_configured": false, 00:09:53.368 "data_offset": 0, 00:09:53.368 "data_size": 0 00:09:53.368 }, 00:09:53.368 { 00:09:53.368 "name": "BaseBdev4", 00:09:53.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.368 "is_configured": false, 00:09:53.368 "data_offset": 0, 00:09:53.368 "data_size": 0 00:09:53.368 } 00:09:53.368 ] 00:09:53.368 }' 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.368 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 [2024-11-27 21:42:16.783126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:53.937 [2024-11-27 21:42:16.783225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 [2024-11-27 21:42:16.795099] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.937 [2024-11-27 21:42:16.795170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.937 [2024-11-27 21:42:16.795196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.937 [2024-11-27 21:42:16.795218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.937 [2024-11-27 21:42:16.795235] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:53.937 [2024-11-27 21:42:16.795255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:53.937 [2024-11-27 21:42:16.795272] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:53.937 [2024-11-27 21:42:16.795292] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 [2024-11-27 21:42:16.815741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:53.937 BaseBdev1 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 [ 00:09:53.937 { 00:09:53.937 "name": "BaseBdev1", 00:09:53.937 "aliases": [ 00:09:53.937 "c4114435-8388-416f-9c0e-f2b7b7ba000f" 00:09:53.937 ], 00:09:53.937 "product_name": "Malloc disk", 00:09:53.937 "block_size": 512, 00:09:53.937 "num_blocks": 65536, 00:09:53.937 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:53.937 "assigned_rate_limits": { 00:09:53.937 "rw_ios_per_sec": 0, 00:09:53.937 "rw_mbytes_per_sec": 0, 00:09:53.937 "r_mbytes_per_sec": 0, 00:09:53.937 "w_mbytes_per_sec": 0 00:09:53.937 }, 00:09:53.937 "claimed": true, 00:09:53.937 "claim_type": "exclusive_write", 00:09:53.937 "zoned": false, 00:09:53.937 "supported_io_types": { 00:09:53.937 "read": true, 00:09:53.937 "write": true, 00:09:53.937 "unmap": true, 00:09:53.937 "flush": true, 00:09:53.937 "reset": true, 00:09:53.937 "nvme_admin": false, 00:09:53.937 "nvme_io": false, 00:09:53.937 "nvme_io_md": false, 00:09:53.937 "write_zeroes": true, 00:09:53.937 "zcopy": true, 00:09:53.937 "get_zone_info": false, 00:09:53.937 "zone_management": false, 00:09:53.937 "zone_append": false, 00:09:53.937 "compare": false, 00:09:53.937 "compare_and_write": false, 00:09:53.937 "abort": true, 00:09:53.937 "seek_hole": false, 00:09:53.937 "seek_data": false, 00:09:53.937 "copy": true, 00:09:53.937 "nvme_iov_md": false 00:09:53.937 }, 00:09:53.937 "memory_domains": [ 00:09:53.937 { 00:09:53.937 "dma_device_id": "system", 00:09:53.937 "dma_device_type": 1 00:09:53.937 }, 00:09:53.937 { 00:09:53.937 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:53.937 "dma_device_type": 2 00:09:53.937 } 00:09:53.937 ], 00:09:53.937 "driver_specific": {} 00:09:53.937 } 00:09:53.937 ] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.937 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.937 "name": "Existed_Raid", 
00:09:53.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.937 "strip_size_kb": 0, 00:09:53.937 "state": "configuring", 00:09:53.937 "raid_level": "raid1", 00:09:53.937 "superblock": false, 00:09:53.937 "num_base_bdevs": 4, 00:09:53.937 "num_base_bdevs_discovered": 1, 00:09:53.937 "num_base_bdevs_operational": 4, 00:09:53.937 "base_bdevs_list": [ 00:09:53.937 { 00:09:53.937 "name": "BaseBdev1", 00:09:53.937 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:53.937 "is_configured": true, 00:09:53.937 "data_offset": 0, 00:09:53.937 "data_size": 65536 00:09:53.937 }, 00:09:53.937 { 00:09:53.937 "name": "BaseBdev2", 00:09:53.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.937 "is_configured": false, 00:09:53.937 "data_offset": 0, 00:09:53.937 "data_size": 0 00:09:53.937 }, 00:09:53.937 { 00:09:53.938 "name": "BaseBdev3", 00:09:53.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.938 "is_configured": false, 00:09:53.938 "data_offset": 0, 00:09:53.938 "data_size": 0 00:09:53.938 }, 00:09:53.938 { 00:09:53.938 "name": "BaseBdev4", 00:09:53.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.938 "is_configured": false, 00:09:53.938 "data_offset": 0, 00:09:53.938 "data_size": 0 00:09:53.938 } 00:09:53.938 ] 00:09:53.938 }' 00:09:53.938 21:42:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.938 21:42:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 [2024-11-27 21:42:17.322954] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.504 [2024-11-27 21:42:17.323045] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 [2024-11-27 21:42:17.334949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.504 [2024-11-27 21:42:17.336887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.504 [2024-11-27 21:42:17.336960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.504 [2024-11-27 21:42:17.337002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.504 [2024-11-27 21:42:17.337038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.504 [2024-11-27 21:42:17.337069] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:54.504 [2024-11-27 21:42:17.337122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:54.504 
21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.504 "name": "Existed_Raid", 00:09:54.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.504 "strip_size_kb": 0, 00:09:54.504 "state": "configuring", 00:09:54.504 "raid_level": "raid1", 00:09:54.504 "superblock": false, 00:09:54.504 "num_base_bdevs": 4, 00:09:54.504 "num_base_bdevs_discovered": 1, 
00:09:54.504 "num_base_bdevs_operational": 4, 00:09:54.504 "base_bdevs_list": [ 00:09:54.504 { 00:09:54.504 "name": "BaseBdev1", 00:09:54.504 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:54.504 "is_configured": true, 00:09:54.504 "data_offset": 0, 00:09:54.504 "data_size": 65536 00:09:54.504 }, 00:09:54.504 { 00:09:54.504 "name": "BaseBdev2", 00:09:54.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.504 "is_configured": false, 00:09:54.504 "data_offset": 0, 00:09:54.504 "data_size": 0 00:09:54.504 }, 00:09:54.504 { 00:09:54.504 "name": "BaseBdev3", 00:09:54.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.504 "is_configured": false, 00:09:54.504 "data_offset": 0, 00:09:54.504 "data_size": 0 00:09:54.504 }, 00:09:54.504 { 00:09:54.504 "name": "BaseBdev4", 00:09:54.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.504 "is_configured": false, 00:09:54.504 "data_offset": 0, 00:09:54.504 "data_size": 0 00:09:54.504 } 00:09:54.504 ] 00:09:54.504 }' 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.504 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.764 [2024-11-27 21:42:17.741001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.764 BaseBdev2 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.764 [ 00:09:54.764 { 00:09:54.764 "name": "BaseBdev2", 00:09:54.764 "aliases": [ 00:09:54.764 "c1f51f38-5813-43a2-9a44-3cee953cb9f6" 00:09:54.764 ], 00:09:54.764 "product_name": "Malloc disk", 00:09:54.764 "block_size": 512, 00:09:54.764 "num_blocks": 65536, 00:09:54.764 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:54.764 "assigned_rate_limits": { 00:09:54.764 "rw_ios_per_sec": 0, 00:09:54.764 "rw_mbytes_per_sec": 0, 00:09:54.764 "r_mbytes_per_sec": 0, 00:09:54.764 "w_mbytes_per_sec": 0 00:09:54.764 }, 00:09:54.764 "claimed": true, 00:09:54.764 "claim_type": "exclusive_write", 00:09:54.764 "zoned": false, 00:09:54.764 "supported_io_types": { 00:09:54.764 "read": true, 
00:09:54.764 "write": true, 00:09:54.764 "unmap": true, 00:09:54.764 "flush": true, 00:09:54.764 "reset": true, 00:09:54.764 "nvme_admin": false, 00:09:54.764 "nvme_io": false, 00:09:54.764 "nvme_io_md": false, 00:09:54.764 "write_zeroes": true, 00:09:54.764 "zcopy": true, 00:09:54.764 "get_zone_info": false, 00:09:54.764 "zone_management": false, 00:09:54.764 "zone_append": false, 00:09:54.764 "compare": false, 00:09:54.764 "compare_and_write": false, 00:09:54.764 "abort": true, 00:09:54.764 "seek_hole": false, 00:09:54.764 "seek_data": false, 00:09:54.764 "copy": true, 00:09:54.764 "nvme_iov_md": false 00:09:54.764 }, 00:09:54.764 "memory_domains": [ 00:09:54.764 { 00:09:54.764 "dma_device_id": "system", 00:09:54.764 "dma_device_type": 1 00:09:54.764 }, 00:09:54.764 { 00:09:54.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.764 "dma_device_type": 2 00:09:54.764 } 00:09:54.764 ], 00:09:54.764 "driver_specific": {} 00:09:54.764 } 00:09:54.764 ] 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.764 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.765 "name": "Existed_Raid", 00:09:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.765 "strip_size_kb": 0, 00:09:54.765 "state": "configuring", 00:09:54.765 "raid_level": "raid1", 00:09:54.765 "superblock": false, 00:09:54.765 "num_base_bdevs": 4, 00:09:54.765 "num_base_bdevs_discovered": 2, 00:09:54.765 "num_base_bdevs_operational": 4, 00:09:54.765 "base_bdevs_list": [ 00:09:54.765 { 00:09:54.765 "name": "BaseBdev1", 00:09:54.765 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:54.765 "is_configured": true, 00:09:54.765 "data_offset": 0, 00:09:54.765 "data_size": 65536 00:09:54.765 }, 00:09:54.765 { 00:09:54.765 "name": "BaseBdev2", 00:09:54.765 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:54.765 "is_configured": true, 
00:09:54.765 "data_offset": 0, 00:09:54.765 "data_size": 65536 00:09:54.765 }, 00:09:54.765 { 00:09:54.765 "name": "BaseBdev3", 00:09:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.765 "is_configured": false, 00:09:54.765 "data_offset": 0, 00:09:54.765 "data_size": 0 00:09:54.765 }, 00:09:54.765 { 00:09:54.765 "name": "BaseBdev4", 00:09:54.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.765 "is_configured": false, 00:09:54.765 "data_offset": 0, 00:09:54.765 "data_size": 0 00:09:54.765 } 00:09:54.765 ] 00:09:54.765 }' 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.765 21:42:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.335 [2024-11-27 21:42:18.256871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:55.335 BaseBdev3 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.335 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.335 [ 00:09:55.335 { 00:09:55.335 "name": "BaseBdev3", 00:09:55.335 "aliases": [ 00:09:55.335 "3593fe01-a31d-46ef-ae47-c6ab7edccca7" 00:09:55.335 ], 00:09:55.335 "product_name": "Malloc disk", 00:09:55.335 "block_size": 512, 00:09:55.335 "num_blocks": 65536, 00:09:55.335 "uuid": "3593fe01-a31d-46ef-ae47-c6ab7edccca7", 00:09:55.335 "assigned_rate_limits": { 00:09:55.335 "rw_ios_per_sec": 0, 00:09:55.335 "rw_mbytes_per_sec": 0, 00:09:55.335 "r_mbytes_per_sec": 0, 00:09:55.335 "w_mbytes_per_sec": 0 00:09:55.335 }, 00:09:55.335 "claimed": true, 00:09:55.335 "claim_type": "exclusive_write", 00:09:55.335 "zoned": false, 00:09:55.335 "supported_io_types": { 00:09:55.335 "read": true, 00:09:55.335 "write": true, 00:09:55.335 "unmap": true, 00:09:55.335 "flush": true, 00:09:55.335 "reset": true, 00:09:55.335 "nvme_admin": false, 00:09:55.335 "nvme_io": false, 00:09:55.335 "nvme_io_md": false, 00:09:55.335 "write_zeroes": true, 00:09:55.335 "zcopy": true, 00:09:55.335 "get_zone_info": false, 00:09:55.335 "zone_management": false, 00:09:55.335 "zone_append": false, 00:09:55.335 "compare": false, 00:09:55.335 "compare_and_write": false, 
00:09:55.335 "abort": true, 00:09:55.335 "seek_hole": false, 00:09:55.335 "seek_data": false, 00:09:55.335 "copy": true, 00:09:55.335 "nvme_iov_md": false 00:09:55.335 }, 00:09:55.335 "memory_domains": [ 00:09:55.335 { 00:09:55.335 "dma_device_id": "system", 00:09:55.335 "dma_device_type": 1 00:09:55.335 }, 00:09:55.336 { 00:09:55.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.336 "dma_device_type": 2 00:09:55.336 } 00:09:55.336 ], 00:09:55.336 "driver_specific": {} 00:09:55.336 } 00:09:55.336 ] 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.336 "name": "Existed_Raid", 00:09:55.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.336 "strip_size_kb": 0, 00:09:55.336 "state": "configuring", 00:09:55.336 "raid_level": "raid1", 00:09:55.336 "superblock": false, 00:09:55.336 "num_base_bdevs": 4, 00:09:55.336 "num_base_bdevs_discovered": 3, 00:09:55.336 "num_base_bdevs_operational": 4, 00:09:55.336 "base_bdevs_list": [ 00:09:55.336 { 00:09:55.336 "name": "BaseBdev1", 00:09:55.336 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:55.336 "is_configured": true, 00:09:55.336 "data_offset": 0, 00:09:55.336 "data_size": 65536 00:09:55.336 }, 00:09:55.336 { 00:09:55.336 "name": "BaseBdev2", 00:09:55.336 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:55.336 "is_configured": true, 00:09:55.336 "data_offset": 0, 00:09:55.336 "data_size": 65536 00:09:55.336 }, 00:09:55.336 { 00:09:55.336 "name": "BaseBdev3", 00:09:55.336 "uuid": "3593fe01-a31d-46ef-ae47-c6ab7edccca7", 00:09:55.336 "is_configured": true, 00:09:55.336 "data_offset": 0, 00:09:55.336 "data_size": 65536 00:09:55.336 }, 00:09:55.336 { 00:09:55.336 "name": "BaseBdev4", 00:09:55.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.336 "is_configured": false, 
00:09:55.336 "data_offset": 0, 00:09:55.336 "data_size": 0 00:09:55.336 } 00:09:55.336 ] 00:09:55.336 }' 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.336 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 [2024-11-27 21:42:18.731042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:55.905 [2024-11-27 21:42:18.731098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:09:55.905 [2024-11-27 21:42:18.731113] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.905 [2024-11-27 21:42:18.731376] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:09:55.905 [2024-11-27 21:42:18.731516] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:09:55.905 [2024-11-27 21:42:18.731528] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:09:55.905 [2024-11-27 21:42:18.731736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.905 BaseBdev4 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.905 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.905 [ 00:09:55.905 { 00:09:55.905 "name": "BaseBdev4", 00:09:55.905 "aliases": [ 00:09:55.905 "318932f3-fcb2-49c2-b43f-e1a5641f4caa" 00:09:55.905 ], 00:09:55.905 "product_name": "Malloc disk", 00:09:55.905 "block_size": 512, 00:09:55.905 "num_blocks": 65536, 00:09:55.905 "uuid": "318932f3-fcb2-49c2-b43f-e1a5641f4caa", 00:09:55.905 "assigned_rate_limits": { 00:09:55.905 "rw_ios_per_sec": 0, 00:09:55.905 "rw_mbytes_per_sec": 0, 00:09:55.905 "r_mbytes_per_sec": 0, 00:09:55.905 "w_mbytes_per_sec": 0 00:09:55.905 }, 00:09:55.905 "claimed": true, 00:09:55.905 "claim_type": "exclusive_write", 00:09:55.905 "zoned": false, 00:09:55.905 "supported_io_types": { 00:09:55.906 "read": true, 00:09:55.906 "write": true, 00:09:55.906 "unmap": true, 00:09:55.906 "flush": true, 00:09:55.906 "reset": true, 00:09:55.906 
"nvme_admin": false, 00:09:55.906 "nvme_io": false, 00:09:55.906 "nvme_io_md": false, 00:09:55.906 "write_zeroes": true, 00:09:55.906 "zcopy": true, 00:09:55.906 "get_zone_info": false, 00:09:55.906 "zone_management": false, 00:09:55.906 "zone_append": false, 00:09:55.906 "compare": false, 00:09:55.906 "compare_and_write": false, 00:09:55.906 "abort": true, 00:09:55.906 "seek_hole": false, 00:09:55.906 "seek_data": false, 00:09:55.906 "copy": true, 00:09:55.906 "nvme_iov_md": false 00:09:55.906 }, 00:09:55.906 "memory_domains": [ 00:09:55.906 { 00:09:55.906 "dma_device_id": "system", 00:09:55.906 "dma_device_type": 1 00:09:55.906 }, 00:09:55.906 { 00:09:55.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.906 "dma_device_type": 2 00:09:55.906 } 00:09:55.906 ], 00:09:55.906 "driver_specific": {} 00:09:55.906 } 00:09:55.906 ] 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:55.906 21:42:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.906 "name": "Existed_Raid", 00:09:55.906 "uuid": "3f6a95db-a890-4eb7-b56e-0c2b9564cd9c", 00:09:55.906 "strip_size_kb": 0, 00:09:55.906 "state": "online", 00:09:55.906 "raid_level": "raid1", 00:09:55.906 "superblock": false, 00:09:55.906 "num_base_bdevs": 4, 00:09:55.906 "num_base_bdevs_discovered": 4, 00:09:55.906 "num_base_bdevs_operational": 4, 00:09:55.906 "base_bdevs_list": [ 00:09:55.906 { 00:09:55.906 "name": "BaseBdev1", 00:09:55.906 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:55.906 "is_configured": true, 00:09:55.906 "data_offset": 0, 00:09:55.906 "data_size": 65536 00:09:55.906 }, 00:09:55.906 { 00:09:55.906 "name": "BaseBdev2", 00:09:55.906 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:55.906 "is_configured": true, 00:09:55.906 "data_offset": 0, 00:09:55.906 "data_size": 65536 00:09:55.906 }, 00:09:55.906 { 00:09:55.906 "name": "BaseBdev3", 00:09:55.906 "uuid": 
"3593fe01-a31d-46ef-ae47-c6ab7edccca7", 00:09:55.906 "is_configured": true, 00:09:55.906 "data_offset": 0, 00:09:55.906 "data_size": 65536 00:09:55.906 }, 00:09:55.906 { 00:09:55.906 "name": "BaseBdev4", 00:09:55.906 "uuid": "318932f3-fcb2-49c2-b43f-e1a5641f4caa", 00:09:55.906 "is_configured": true, 00:09:55.906 "data_offset": 0, 00:09:55.906 "data_size": 65536 00:09:55.906 } 00:09:55.906 ] 00:09:55.906 }' 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.906 21:42:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.166 [2024-11-27 21:42:19.210585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.166 21:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:56.166 "name": "Existed_Raid", 00:09:56.166 "aliases": [ 00:09:56.166 "3f6a95db-a890-4eb7-b56e-0c2b9564cd9c" 00:09:56.166 ], 00:09:56.166 "product_name": "Raid Volume", 00:09:56.166 "block_size": 512, 00:09:56.166 "num_blocks": 65536, 00:09:56.166 "uuid": "3f6a95db-a890-4eb7-b56e-0c2b9564cd9c", 00:09:56.166 "assigned_rate_limits": { 00:09:56.166 "rw_ios_per_sec": 0, 00:09:56.166 "rw_mbytes_per_sec": 0, 00:09:56.166 "r_mbytes_per_sec": 0, 00:09:56.166 "w_mbytes_per_sec": 0 00:09:56.166 }, 00:09:56.166 "claimed": false, 00:09:56.166 "zoned": false, 00:09:56.166 "supported_io_types": { 00:09:56.166 "read": true, 00:09:56.166 "write": true, 00:09:56.166 "unmap": false, 00:09:56.166 "flush": false, 00:09:56.166 "reset": true, 00:09:56.166 "nvme_admin": false, 00:09:56.166 "nvme_io": false, 00:09:56.166 "nvme_io_md": false, 00:09:56.166 "write_zeroes": true, 00:09:56.166 "zcopy": false, 00:09:56.166 "get_zone_info": false, 00:09:56.166 "zone_management": false, 00:09:56.166 "zone_append": false, 00:09:56.166 "compare": false, 00:09:56.166 "compare_and_write": false, 00:09:56.166 "abort": false, 00:09:56.166 "seek_hole": false, 00:09:56.166 "seek_data": false, 00:09:56.166 "copy": false, 00:09:56.166 "nvme_iov_md": false 00:09:56.166 }, 00:09:56.166 "memory_domains": [ 00:09:56.166 { 00:09:56.166 "dma_device_id": "system", 00:09:56.166 "dma_device_type": 1 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.166 "dma_device_type": 2 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "system", 00:09:56.166 "dma_device_type": 1 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.166 "dma_device_type": 2 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "system", 00:09:56.166 "dma_device_type": 1 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:56.166 "dma_device_type": 2 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "system", 00:09:56.166 "dma_device_type": 1 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.166 "dma_device_type": 2 00:09:56.166 } 00:09:56.166 ], 00:09:56.166 "driver_specific": { 00:09:56.166 "raid": { 00:09:56.166 "uuid": "3f6a95db-a890-4eb7-b56e-0c2b9564cd9c", 00:09:56.166 "strip_size_kb": 0, 00:09:56.166 "state": "online", 00:09:56.166 "raid_level": "raid1", 00:09:56.166 "superblock": false, 00:09:56.166 "num_base_bdevs": 4, 00:09:56.166 "num_base_bdevs_discovered": 4, 00:09:56.166 "num_base_bdevs_operational": 4, 00:09:56.166 "base_bdevs_list": [ 00:09:56.166 { 00:09:56.166 "name": "BaseBdev1", 00:09:56.166 "uuid": "c4114435-8388-416f-9c0e-f2b7b7ba000f", 00:09:56.166 "is_configured": true, 00:09:56.166 "data_offset": 0, 00:09:56.166 "data_size": 65536 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "name": "BaseBdev2", 00:09:56.166 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:56.166 "is_configured": true, 00:09:56.166 "data_offset": 0, 00:09:56.166 "data_size": 65536 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "name": "BaseBdev3", 00:09:56.166 "uuid": "3593fe01-a31d-46ef-ae47-c6ab7edccca7", 00:09:56.166 "is_configured": true, 00:09:56.166 "data_offset": 0, 00:09:56.166 "data_size": 65536 00:09:56.166 }, 00:09:56.166 { 00:09:56.166 "name": "BaseBdev4", 00:09:56.166 "uuid": "318932f3-fcb2-49c2-b43f-e1a5641f4caa", 00:09:56.166 "is_configured": true, 00:09:56.166 "data_offset": 0, 00:09:56.166 "data_size": 65536 00:09:56.166 } 00:09:56.166 ] 00:09:56.166 } 00:09:56.166 } 00:09:56.166 }' 00:09:56.166 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:56.426 BaseBdev2 00:09:56.426 BaseBdev3 
00:09:56.426 BaseBdev4' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.426 21:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.426 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.427 21:42:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.427 [2024-11-27 21:42:19.513794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.427 
21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.427 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.686 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.686 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.686 "name": "Existed_Raid", 00:09:56.686 "uuid": "3f6a95db-a890-4eb7-b56e-0c2b9564cd9c", 00:09:56.686 "strip_size_kb": 0, 00:09:56.686 "state": "online", 00:09:56.686 "raid_level": "raid1", 00:09:56.686 "superblock": false, 00:09:56.686 "num_base_bdevs": 4, 00:09:56.686 "num_base_bdevs_discovered": 3, 00:09:56.686 "num_base_bdevs_operational": 3, 00:09:56.686 "base_bdevs_list": [ 00:09:56.686 { 00:09:56.686 "name": null, 00:09:56.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.686 "is_configured": false, 00:09:56.686 "data_offset": 0, 00:09:56.686 "data_size": 65536 00:09:56.686 }, 00:09:56.686 { 00:09:56.686 "name": "BaseBdev2", 00:09:56.686 "uuid": "c1f51f38-5813-43a2-9a44-3cee953cb9f6", 00:09:56.686 "is_configured": true, 00:09:56.686 "data_offset": 0, 00:09:56.686 "data_size": 65536 00:09:56.686 }, 00:09:56.686 { 00:09:56.686 "name": "BaseBdev3", 00:09:56.686 "uuid": "3593fe01-a31d-46ef-ae47-c6ab7edccca7", 00:09:56.686 "is_configured": true, 00:09:56.686 "data_offset": 0, 
00:09:56.686 "data_size": 65536 00:09:56.686 }, 00:09:56.686 { 00:09:56.686 "name": "BaseBdev4", 00:09:56.686 "uuid": "318932f3-fcb2-49c2-b43f-e1a5641f4caa", 00:09:56.686 "is_configured": true, 00:09:56.686 "data_offset": 0, 00:09:56.686 "data_size": 65536 00:09:56.687 } 00:09:56.687 ] 00:09:56.687 }' 00:09:56.687 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.687 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.946 21:42:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.946 [2024-11-27 21:42:20.040072] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.946 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 [2024-11-27 21:42:20.111174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 [2024-11-27 21:42:20.182160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:57.207 [2024-11-27 21:42:20.182292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.207 [2024-11-27 21:42:20.193677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.207 [2024-11-27 21:42:20.193818] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:57.207 [2024-11-27 21:42:20.193836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 BaseBdev2 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.207 [ 00:09:57.207 { 00:09:57.207 "name": "BaseBdev2", 00:09:57.207 "aliases": [ 00:09:57.207 "66716aba-920b-4215-83af-9e8aac316589" 00:09:57.207 ], 00:09:57.207 "product_name": "Malloc disk", 00:09:57.207 "block_size": 512, 00:09:57.207 "num_blocks": 65536, 00:09:57.207 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:57.207 "assigned_rate_limits": { 00:09:57.207 "rw_ios_per_sec": 0, 00:09:57.207 "rw_mbytes_per_sec": 0, 00:09:57.207 "r_mbytes_per_sec": 0, 00:09:57.207 "w_mbytes_per_sec": 0 00:09:57.207 }, 00:09:57.207 "claimed": false, 00:09:57.207 "zoned": false, 00:09:57.207 "supported_io_types": { 00:09:57.207 "read": true, 00:09:57.207 "write": true, 00:09:57.207 "unmap": true, 00:09:57.207 "flush": true, 00:09:57.207 "reset": true, 00:09:57.207 "nvme_admin": false, 00:09:57.207 "nvme_io": false, 00:09:57.207 "nvme_io_md": false, 00:09:57.207 "write_zeroes": true, 00:09:57.207 "zcopy": true, 00:09:57.207 "get_zone_info": false, 00:09:57.207 "zone_management": false, 00:09:57.207 "zone_append": false, 
00:09:57.207 "compare": false, 00:09:57.207 "compare_and_write": false, 00:09:57.207 "abort": true, 00:09:57.207 "seek_hole": false, 00:09:57.207 "seek_data": false, 00:09:57.207 "copy": true, 00:09:57.207 "nvme_iov_md": false 00:09:57.207 }, 00:09:57.207 "memory_domains": [ 00:09:57.207 { 00:09:57.207 "dma_device_id": "system", 00:09:57.207 "dma_device_type": 1 00:09:57.207 }, 00:09:57.207 { 00:09:57.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.207 "dma_device_type": 2 00:09:57.207 } 00:09:57.207 ], 00:09:57.207 "driver_specific": {} 00:09:57.207 } 00:09:57.207 ] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.207 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.208 BaseBdev3 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.208 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.468 [ 00:09:57.468 { 00:09:57.468 "name": "BaseBdev3", 00:09:57.468 "aliases": [ 00:09:57.468 "58e4802e-d3a8-423f-b5ec-a716ae8002b9" 00:09:57.468 ], 00:09:57.468 "product_name": "Malloc disk", 00:09:57.468 "block_size": 512, 00:09:57.468 "num_blocks": 65536, 00:09:57.468 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:57.468 "assigned_rate_limits": { 00:09:57.468 "rw_ios_per_sec": 0, 00:09:57.468 "rw_mbytes_per_sec": 0, 00:09:57.468 "r_mbytes_per_sec": 0, 00:09:57.468 "w_mbytes_per_sec": 0 00:09:57.468 }, 00:09:57.468 "claimed": false, 00:09:57.468 "zoned": false, 00:09:57.468 "supported_io_types": { 00:09:57.468 "read": true, 00:09:57.468 "write": true, 00:09:57.468 "unmap": true, 00:09:57.468 "flush": true, 00:09:57.468 "reset": true, 00:09:57.468 "nvme_admin": false, 00:09:57.468 "nvme_io": false, 00:09:57.468 "nvme_io_md": false, 00:09:57.468 "write_zeroes": true, 00:09:57.468 "zcopy": true, 00:09:57.468 "get_zone_info": false, 00:09:57.468 "zone_management": false, 00:09:57.468 "zone_append": false, 
00:09:57.468 "compare": false, 00:09:57.468 "compare_and_write": false, 00:09:57.468 "abort": true, 00:09:57.468 "seek_hole": false, 00:09:57.468 "seek_data": false, 00:09:57.468 "copy": true, 00:09:57.468 "nvme_iov_md": false 00:09:57.468 }, 00:09:57.468 "memory_domains": [ 00:09:57.468 { 00:09:57.468 "dma_device_id": "system", 00:09:57.468 "dma_device_type": 1 00:09:57.468 }, 00:09:57.468 { 00:09:57.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.468 "dma_device_type": 2 00:09:57.468 } 00:09:57.468 ], 00:09:57.468 "driver_specific": {} 00:09:57.468 } 00:09:57.468 ] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.468 BaseBdev4 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.468 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.468 [ 00:09:57.468 { 00:09:57.468 "name": "BaseBdev4", 00:09:57.468 "aliases": [ 00:09:57.468 "a0e4ea53-05aa-4570-b564-0143eaa5955e" 00:09:57.468 ], 00:09:57.468 "product_name": "Malloc disk", 00:09:57.468 "block_size": 512, 00:09:57.468 "num_blocks": 65536, 00:09:57.468 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:57.468 "assigned_rate_limits": { 00:09:57.468 "rw_ios_per_sec": 0, 00:09:57.468 "rw_mbytes_per_sec": 0, 00:09:57.468 "r_mbytes_per_sec": 0, 00:09:57.468 "w_mbytes_per_sec": 0 00:09:57.468 }, 00:09:57.468 "claimed": false, 00:09:57.468 "zoned": false, 00:09:57.468 "supported_io_types": { 00:09:57.468 "read": true, 00:09:57.468 "write": true, 00:09:57.468 "unmap": true, 00:09:57.468 "flush": true, 00:09:57.468 "reset": true, 00:09:57.468 "nvme_admin": false, 00:09:57.468 "nvme_io": false, 00:09:57.468 "nvme_io_md": false, 00:09:57.468 "write_zeroes": true, 00:09:57.468 "zcopy": true, 00:09:57.468 "get_zone_info": false, 00:09:57.468 "zone_management": false, 00:09:57.468 "zone_append": false, 
00:09:57.469 "compare": false, 00:09:57.469 "compare_and_write": false, 00:09:57.469 "abort": true, 00:09:57.469 "seek_hole": false, 00:09:57.469 "seek_data": false, 00:09:57.469 "copy": true, 00:09:57.469 "nvme_iov_md": false 00:09:57.469 }, 00:09:57.469 "memory_domains": [ 00:09:57.469 { 00:09:57.469 "dma_device_id": "system", 00:09:57.469 "dma_device_type": 1 00:09:57.469 }, 00:09:57.469 { 00:09:57.469 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.469 "dma_device_type": 2 00:09:57.469 } 00:09:57.469 ], 00:09:57.469 "driver_specific": {} 00:09:57.469 } 00:09:57.469 ] 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.469 [2024-11-27 21:42:20.414111] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.469 [2024-11-27 21:42:20.414190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.469 [2024-11-27 21:42:20.414241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.469 [2024-11-27 21:42:20.416109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:57.469 [2024-11-27 21:42:20.416200] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:57.469 "name": "Existed_Raid", 00:09:57.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.469 "strip_size_kb": 0, 00:09:57.469 "state": "configuring", 00:09:57.469 "raid_level": "raid1", 00:09:57.469 "superblock": false, 00:09:57.469 "num_base_bdevs": 4, 00:09:57.469 "num_base_bdevs_discovered": 3, 00:09:57.469 "num_base_bdevs_operational": 4, 00:09:57.469 "base_bdevs_list": [ 00:09:57.469 { 00:09:57.469 "name": "BaseBdev1", 00:09:57.469 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.469 "is_configured": false, 00:09:57.469 "data_offset": 0, 00:09:57.469 "data_size": 0 00:09:57.469 }, 00:09:57.469 { 00:09:57.469 "name": "BaseBdev2", 00:09:57.469 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:57.469 "is_configured": true, 00:09:57.469 "data_offset": 0, 00:09:57.469 "data_size": 65536 00:09:57.469 }, 00:09:57.469 { 00:09:57.469 "name": "BaseBdev3", 00:09:57.469 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:57.469 "is_configured": true, 00:09:57.469 "data_offset": 0, 00:09:57.469 "data_size": 65536 00:09:57.469 }, 00:09:57.469 { 00:09:57.469 "name": "BaseBdev4", 00:09:57.469 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:57.469 "is_configured": true, 00:09:57.469 "data_offset": 0, 00:09:57.469 "data_size": 65536 00:09:57.469 } 00:09:57.469 ] 00:09:57.469 }' 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.469 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.038 [2024-11-27 21:42:20.857368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.038 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.038 "name": "Existed_Raid", 00:09:58.038 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.038 "strip_size_kb": 0, 00:09:58.038 "state": "configuring", 00:09:58.038 "raid_level": "raid1", 00:09:58.038 "superblock": false, 00:09:58.038 "num_base_bdevs": 4, 00:09:58.038 "num_base_bdevs_discovered": 2, 00:09:58.038 "num_base_bdevs_operational": 4, 00:09:58.038 "base_bdevs_list": [ 00:09:58.038 { 00:09:58.038 "name": "BaseBdev1", 00:09:58.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.038 "is_configured": false, 00:09:58.038 "data_offset": 0, 00:09:58.038 "data_size": 0 00:09:58.038 }, 00:09:58.038 { 00:09:58.038 "name": null, 00:09:58.038 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:58.038 "is_configured": false, 00:09:58.038 "data_offset": 0, 00:09:58.038 "data_size": 65536 00:09:58.038 }, 00:09:58.038 { 00:09:58.038 "name": "BaseBdev3", 00:09:58.038 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:58.038 "is_configured": true, 00:09:58.038 "data_offset": 0, 00:09:58.038 "data_size": 65536 00:09:58.038 }, 00:09:58.038 { 00:09:58.038 "name": "BaseBdev4", 00:09:58.038 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:58.038 "is_configured": true, 00:09:58.038 "data_offset": 0, 00:09:58.038 "data_size": 65536 00:09:58.038 } 00:09:58.038 ] 00:09:58.038 }' 00:09:58.039 21:42:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.039 21:42:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.298 [2024-11-27 21:42:21.355283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.298 BaseBdev1 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.298 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.299 [ 00:09:58.299 { 00:09:58.299 "name": "BaseBdev1", 00:09:58.299 "aliases": [ 00:09:58.299 "8e912a66-10f5-4497-99ea-9f1ad564573f" 00:09:58.299 ], 00:09:58.299 "product_name": "Malloc disk", 00:09:58.299 "block_size": 512, 00:09:58.299 "num_blocks": 65536, 00:09:58.299 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:09:58.299 "assigned_rate_limits": { 00:09:58.299 "rw_ios_per_sec": 0, 00:09:58.299 "rw_mbytes_per_sec": 0, 00:09:58.299 "r_mbytes_per_sec": 0, 00:09:58.299 "w_mbytes_per_sec": 0 00:09:58.299 }, 00:09:58.299 "claimed": true, 00:09:58.299 "claim_type": "exclusive_write", 00:09:58.299 "zoned": false, 00:09:58.299 "supported_io_types": { 00:09:58.299 "read": true, 00:09:58.299 "write": true, 00:09:58.299 "unmap": true, 00:09:58.299 "flush": true, 00:09:58.299 "reset": true, 00:09:58.299 "nvme_admin": false, 00:09:58.299 "nvme_io": false, 00:09:58.299 "nvme_io_md": false, 00:09:58.299 "write_zeroes": true, 00:09:58.299 "zcopy": true, 00:09:58.299 "get_zone_info": false, 00:09:58.299 "zone_management": false, 00:09:58.299 "zone_append": false, 00:09:58.299 "compare": false, 00:09:58.299 "compare_and_write": false, 00:09:58.299 "abort": true, 00:09:58.299 "seek_hole": false, 00:09:58.299 "seek_data": false, 00:09:58.299 "copy": true, 00:09:58.299 "nvme_iov_md": false 00:09:58.299 }, 00:09:58.299 "memory_domains": [ 00:09:58.299 { 00:09:58.299 "dma_device_id": "system", 00:09:58.299 "dma_device_type": 1 00:09:58.299 }, 00:09:58.299 { 00:09:58.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.299 "dma_device_type": 2 00:09:58.299 } 00:09:58.299 ], 00:09:58.299 "driver_specific": {} 00:09:58.299 } 00:09:58.299 ] 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.299 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.558 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.558 "name": "Existed_Raid", 00:09:58.558 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:58.558 "strip_size_kb": 0, 00:09:58.558 "state": "configuring", 00:09:58.558 "raid_level": "raid1", 00:09:58.558 "superblock": false, 00:09:58.558 "num_base_bdevs": 4, 00:09:58.558 "num_base_bdevs_discovered": 3, 00:09:58.558 "num_base_bdevs_operational": 4, 00:09:58.558 "base_bdevs_list": [ 00:09:58.558 { 00:09:58.558 "name": "BaseBdev1", 00:09:58.558 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:09:58.558 "is_configured": true, 00:09:58.558 "data_offset": 0, 00:09:58.558 "data_size": 65536 00:09:58.558 }, 00:09:58.558 { 00:09:58.559 "name": null, 00:09:58.559 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:58.559 "is_configured": false, 00:09:58.559 "data_offset": 0, 00:09:58.559 "data_size": 65536 00:09:58.559 }, 00:09:58.559 { 00:09:58.559 "name": "BaseBdev3", 00:09:58.559 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:58.559 "is_configured": true, 00:09:58.559 "data_offset": 0, 00:09:58.559 "data_size": 65536 00:09:58.559 }, 00:09:58.559 { 00:09:58.559 "name": "BaseBdev4", 00:09:58.559 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:58.559 "is_configured": true, 00:09:58.559 "data_offset": 0, 00:09:58.559 "data_size": 65536 00:09:58.559 } 00:09:58.559 ] 00:09:58.559 }' 00:09:58.559 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.559 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.819 [2024-11-27 21:42:21.862471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.819 "name": "Existed_Raid", 00:09:58.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.819 "strip_size_kb": 0, 00:09:58.819 "state": "configuring", 00:09:58.819 "raid_level": "raid1", 00:09:58.819 "superblock": false, 00:09:58.819 "num_base_bdevs": 4, 00:09:58.819 "num_base_bdevs_discovered": 2, 00:09:58.819 "num_base_bdevs_operational": 4, 00:09:58.819 "base_bdevs_list": [ 00:09:58.819 { 00:09:58.819 "name": "BaseBdev1", 00:09:58.819 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:09:58.819 "is_configured": true, 00:09:58.819 "data_offset": 0, 00:09:58.819 "data_size": 65536 00:09:58.819 }, 00:09:58.819 { 00:09:58.819 "name": null, 00:09:58.819 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:58.819 "is_configured": false, 00:09:58.819 "data_offset": 0, 00:09:58.819 "data_size": 65536 00:09:58.819 }, 00:09:58.819 { 00:09:58.819 "name": null, 00:09:58.819 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:58.819 "is_configured": false, 00:09:58.819 "data_offset": 0, 00:09:58.819 "data_size": 65536 00:09:58.819 }, 00:09:58.819 { 00:09:58.819 "name": "BaseBdev4", 00:09:58.819 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:58.819 "is_configured": true, 00:09:58.819 "data_offset": 0, 00:09:58.819 "data_size": 65536 00:09:58.819 } 00:09:58.819 ] 00:09:58.819 }' 00:09:58.819 21:42:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.819 21:42:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.389 [2024-11-27 21:42:22.309744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.389 21:42:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.389 "name": "Existed_Raid", 00:09:59.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.389 "strip_size_kb": 0, 00:09:59.389 "state": "configuring", 00:09:59.389 "raid_level": "raid1", 00:09:59.389 "superblock": false, 00:09:59.389 "num_base_bdevs": 4, 00:09:59.389 "num_base_bdevs_discovered": 3, 00:09:59.389 "num_base_bdevs_operational": 4, 00:09:59.389 "base_bdevs_list": [ 00:09:59.389 { 00:09:59.389 "name": "BaseBdev1", 00:09:59.389 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:09:59.389 "is_configured": true, 00:09:59.389 "data_offset": 0, 00:09:59.389 "data_size": 65536 00:09:59.389 }, 00:09:59.389 { 00:09:59.389 "name": null, 00:09:59.389 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:59.389 "is_configured": false, 00:09:59.389 "data_offset": 
0, 00:09:59.389 "data_size": 65536 00:09:59.389 }, 00:09:59.389 { 00:09:59.389 "name": "BaseBdev3", 00:09:59.389 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:59.389 "is_configured": true, 00:09:59.389 "data_offset": 0, 00:09:59.389 "data_size": 65536 00:09:59.389 }, 00:09:59.389 { 00:09:59.389 "name": "BaseBdev4", 00:09:59.389 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:59.389 "is_configured": true, 00:09:59.389 "data_offset": 0, 00:09:59.389 "data_size": 65536 00:09:59.389 } 00:09:59.389 ] 00:09:59.389 }' 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.389 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.649 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.649 [2024-11-27 21:42:22.768993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.909 21:42:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.909 "name": "Existed_Raid", 00:09:59.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.909 "strip_size_kb": 0, 00:09:59.909 "state": "configuring", 00:09:59.909 
"raid_level": "raid1", 00:09:59.909 "superblock": false, 00:09:59.909 "num_base_bdevs": 4, 00:09:59.909 "num_base_bdevs_discovered": 2, 00:09:59.909 "num_base_bdevs_operational": 4, 00:09:59.909 "base_bdevs_list": [ 00:09:59.909 { 00:09:59.909 "name": null, 00:09:59.909 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:09:59.909 "is_configured": false, 00:09:59.909 "data_offset": 0, 00:09:59.909 "data_size": 65536 00:09:59.909 }, 00:09:59.909 { 00:09:59.909 "name": null, 00:09:59.909 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:09:59.909 "is_configured": false, 00:09:59.909 "data_offset": 0, 00:09:59.909 "data_size": 65536 00:09:59.909 }, 00:09:59.909 { 00:09:59.909 "name": "BaseBdev3", 00:09:59.909 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:09:59.909 "is_configured": true, 00:09:59.909 "data_offset": 0, 00:09:59.909 "data_size": 65536 00:09:59.909 }, 00:09:59.909 { 00:09:59.909 "name": "BaseBdev4", 00:09:59.909 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:09:59.909 "is_configured": true, 00:09:59.909 "data_offset": 0, 00:09:59.909 "data_size": 65536 00:09:59.909 } 00:09:59.909 ] 00:09:59.909 }' 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.909 21:42:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.169 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.170 [2024-11-27 21:42:23.190739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.170 "name": "Existed_Raid", 00:10:00.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.170 "strip_size_kb": 0, 00:10:00.170 "state": "configuring", 00:10:00.170 "raid_level": "raid1", 00:10:00.170 "superblock": false, 00:10:00.170 "num_base_bdevs": 4, 00:10:00.170 "num_base_bdevs_discovered": 3, 00:10:00.170 "num_base_bdevs_operational": 4, 00:10:00.170 "base_bdevs_list": [ 00:10:00.170 { 00:10:00.170 "name": null, 00:10:00.170 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:10:00.170 "is_configured": false, 00:10:00.170 "data_offset": 0, 00:10:00.170 "data_size": 65536 00:10:00.170 }, 00:10:00.170 { 00:10:00.170 "name": "BaseBdev2", 00:10:00.170 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:10:00.170 "is_configured": true, 00:10:00.170 "data_offset": 0, 00:10:00.170 "data_size": 65536 00:10:00.170 }, 00:10:00.170 { 00:10:00.170 "name": "BaseBdev3", 00:10:00.170 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:10:00.170 "is_configured": true, 00:10:00.170 "data_offset": 0, 00:10:00.170 "data_size": 65536 00:10:00.170 }, 00:10:00.170 { 00:10:00.170 "name": "BaseBdev4", 00:10:00.170 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:10:00.170 "is_configured": true, 00:10:00.170 "data_offset": 0, 00:10:00.170 "data_size": 65536 00:10:00.170 } 00:10:00.170 ] 00:10:00.170 }' 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.170 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.739 21:42:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8e912a66-10f5-4497-99ea-9f1ad564573f 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.739 [2024-11-27 21:42:23.764947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:00.739 [2024-11-27 21:42:23.765059] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:00.739 [2024-11-27 21:42:23.765111] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:00.739 
[2024-11-27 21:42:23.765414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:10:00.739 [2024-11-27 21:42:23.765566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:00.739 [2024-11-27 21:42:23.765577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:10:00.739 [2024-11-27 21:42:23.765767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.739 NewBaseBdev 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.739 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.740 [ 00:10:00.740 { 00:10:00.740 "name": "NewBaseBdev", 00:10:00.740 "aliases": [ 00:10:00.740 "8e912a66-10f5-4497-99ea-9f1ad564573f" 00:10:00.740 ], 00:10:00.740 "product_name": "Malloc disk", 00:10:00.740 "block_size": 512, 00:10:00.740 "num_blocks": 65536, 00:10:00.740 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:10:00.740 "assigned_rate_limits": { 00:10:00.740 "rw_ios_per_sec": 0, 00:10:00.740 "rw_mbytes_per_sec": 0, 00:10:00.740 "r_mbytes_per_sec": 0, 00:10:00.740 "w_mbytes_per_sec": 0 00:10:00.740 }, 00:10:00.740 "claimed": true, 00:10:00.740 "claim_type": "exclusive_write", 00:10:00.740 "zoned": false, 00:10:00.740 "supported_io_types": { 00:10:00.740 "read": true, 00:10:00.740 "write": true, 00:10:00.740 "unmap": true, 00:10:00.740 "flush": true, 00:10:00.740 "reset": true, 00:10:00.740 "nvme_admin": false, 00:10:00.740 "nvme_io": false, 00:10:00.740 "nvme_io_md": false, 00:10:00.740 "write_zeroes": true, 00:10:00.740 "zcopy": true, 00:10:00.740 "get_zone_info": false, 00:10:00.740 "zone_management": false, 00:10:00.740 "zone_append": false, 00:10:00.740 "compare": false, 00:10:00.740 "compare_and_write": false, 00:10:00.740 "abort": true, 00:10:00.740 "seek_hole": false, 00:10:00.740 "seek_data": false, 00:10:00.740 "copy": true, 00:10:00.740 "nvme_iov_md": false 00:10:00.740 }, 00:10:00.740 "memory_domains": [ 00:10:00.740 { 00:10:00.740 "dma_device_id": "system", 00:10:00.740 "dma_device_type": 1 00:10:00.740 }, 00:10:00.740 { 00:10:00.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.740 "dma_device_type": 2 00:10:00.740 } 00:10:00.740 ], 00:10:00.740 "driver_specific": {} 00:10:00.740 } 00:10:00.740 ] 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.740 "name": "Existed_Raid", 00:10:00.740 "uuid": "cae7cc2b-4401-45b5-bd71-e4688d70214d", 00:10:00.740 "strip_size_kb": 0, 00:10:00.740 "state": "online", 00:10:00.740 
"raid_level": "raid1", 00:10:00.740 "superblock": false, 00:10:00.740 "num_base_bdevs": 4, 00:10:00.740 "num_base_bdevs_discovered": 4, 00:10:00.740 "num_base_bdevs_operational": 4, 00:10:00.740 "base_bdevs_list": [ 00:10:00.740 { 00:10:00.740 "name": "NewBaseBdev", 00:10:00.740 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:10:00.740 "is_configured": true, 00:10:00.740 "data_offset": 0, 00:10:00.740 "data_size": 65536 00:10:00.740 }, 00:10:00.740 { 00:10:00.740 "name": "BaseBdev2", 00:10:00.740 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:10:00.740 "is_configured": true, 00:10:00.740 "data_offset": 0, 00:10:00.740 "data_size": 65536 00:10:00.740 }, 00:10:00.740 { 00:10:00.740 "name": "BaseBdev3", 00:10:00.740 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:10:00.740 "is_configured": true, 00:10:00.740 "data_offset": 0, 00:10:00.740 "data_size": 65536 00:10:00.740 }, 00:10:00.740 { 00:10:00.740 "name": "BaseBdev4", 00:10:00.740 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:10:00.740 "is_configured": true, 00:10:00.740 "data_offset": 0, 00:10:00.740 "data_size": 65536 00:10:00.740 } 00:10:00.740 ] 00:10:00.740 }' 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.740 21:42:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.321 [2024-11-27 21:42:24.252558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.321 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.321 "name": "Existed_Raid", 00:10:01.321 "aliases": [ 00:10:01.321 "cae7cc2b-4401-45b5-bd71-e4688d70214d" 00:10:01.321 ], 00:10:01.321 "product_name": "Raid Volume", 00:10:01.321 "block_size": 512, 00:10:01.321 "num_blocks": 65536, 00:10:01.321 "uuid": "cae7cc2b-4401-45b5-bd71-e4688d70214d", 00:10:01.321 "assigned_rate_limits": { 00:10:01.321 "rw_ios_per_sec": 0, 00:10:01.321 "rw_mbytes_per_sec": 0, 00:10:01.321 "r_mbytes_per_sec": 0, 00:10:01.321 "w_mbytes_per_sec": 0 00:10:01.321 }, 00:10:01.321 "claimed": false, 00:10:01.321 "zoned": false, 00:10:01.321 "supported_io_types": { 00:10:01.321 "read": true, 00:10:01.321 "write": true, 00:10:01.321 "unmap": false, 00:10:01.321 "flush": false, 00:10:01.321 "reset": true, 00:10:01.321 "nvme_admin": false, 00:10:01.321 "nvme_io": false, 00:10:01.321 "nvme_io_md": false, 00:10:01.321 "write_zeroes": true, 00:10:01.321 "zcopy": false, 00:10:01.321 "get_zone_info": false, 00:10:01.321 "zone_management": false, 00:10:01.321 "zone_append": false, 00:10:01.321 "compare": false, 00:10:01.321 "compare_and_write": false, 00:10:01.321 "abort": false, 00:10:01.321 "seek_hole": false, 00:10:01.321 "seek_data": false, 00:10:01.321 
"copy": false, 00:10:01.321 "nvme_iov_md": false 00:10:01.321 }, 00:10:01.321 "memory_domains": [ 00:10:01.321 { 00:10:01.321 "dma_device_id": "system", 00:10:01.321 "dma_device_type": 1 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.321 "dma_device_type": 2 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "system", 00:10:01.321 "dma_device_type": 1 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.321 "dma_device_type": 2 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "system", 00:10:01.321 "dma_device_type": 1 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.321 "dma_device_type": 2 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "system", 00:10:01.321 "dma_device_type": 1 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.321 "dma_device_type": 2 00:10:01.321 } 00:10:01.321 ], 00:10:01.321 "driver_specific": { 00:10:01.321 "raid": { 00:10:01.321 "uuid": "cae7cc2b-4401-45b5-bd71-e4688d70214d", 00:10:01.321 "strip_size_kb": 0, 00:10:01.321 "state": "online", 00:10:01.321 "raid_level": "raid1", 00:10:01.321 "superblock": false, 00:10:01.321 "num_base_bdevs": 4, 00:10:01.321 "num_base_bdevs_discovered": 4, 00:10:01.321 "num_base_bdevs_operational": 4, 00:10:01.321 "base_bdevs_list": [ 00:10:01.321 { 00:10:01.321 "name": "NewBaseBdev", 00:10:01.321 "uuid": "8e912a66-10f5-4497-99ea-9f1ad564573f", 00:10:01.321 "is_configured": true, 00:10:01.321 "data_offset": 0, 00:10:01.321 "data_size": 65536 00:10:01.321 }, 00:10:01.321 { 00:10:01.321 "name": "BaseBdev2", 00:10:01.321 "uuid": "66716aba-920b-4215-83af-9e8aac316589", 00:10:01.321 "is_configured": true, 00:10:01.321 "data_offset": 0, 00:10:01.322 "data_size": 65536 00:10:01.322 }, 00:10:01.322 { 00:10:01.322 "name": "BaseBdev3", 00:10:01.322 "uuid": "58e4802e-d3a8-423f-b5ec-a716ae8002b9", 00:10:01.322 
"is_configured": true, 00:10:01.322 "data_offset": 0, 00:10:01.322 "data_size": 65536 00:10:01.322 }, 00:10:01.322 { 00:10:01.322 "name": "BaseBdev4", 00:10:01.322 "uuid": "a0e4ea53-05aa-4570-b564-0143eaa5955e", 00:10:01.322 "is_configured": true, 00:10:01.322 "data_offset": 0, 00:10:01.322 "data_size": 65536 00:10:01.322 } 00:10:01.322 ] 00:10:01.322 } 00:10:01.322 } 00:10:01.322 }' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:01.322 BaseBdev2 00:10:01.322 BaseBdev3 00:10:01.322 BaseBdev4' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.322 21:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.322 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.595 21:42:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.595 [2024-11-27 21:42:24.575692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:01.595 [2024-11-27 21:42:24.575720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.595 [2024-11-27 21:42:24.575827] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.595 [2024-11-27 21:42:24.576089] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:01.595 [2024-11-27 21:42:24.576105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 83694 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83694 ']' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83694 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83694 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83694' 00:10:01.595 killing process with pid 83694 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 83694 00:10:01.595 [2024-11-27 21:42:24.626166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:01.595 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 83694 00:10:01.595 [2024-11-27 21:42:24.666704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:01.856 00:10:01.856 real 0m9.412s 00:10:01.856 user 0m16.164s 00:10:01.856 sys 0m1.893s 00:10:01.856 ************************************ 00:10:01.856 END TEST raid_state_function_test 00:10:01.856 ************************************ 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:01.856 21:42:24 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:10:01.856 21:42:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:01.856 21:42:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.856 21:42:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:01.856 ************************************ 00:10:01.856 START TEST raid_state_function_test_sb 00:10:01.856 ************************************ 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.856 
21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=84343 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84343' 00:10:01.856 Process raid pid: 84343 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 84343 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84343 ']' 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.856 21:42:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.116 [2024-11-27 21:42:25.045354] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:10:02.116 [2024-11-27 21:42:25.045558] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.116 [2024-11-27 21:42:25.203031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.116 [2024-11-27 21:42:25.231429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.376 [2024-11-27 21:42:25.274035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.376 [2024-11-27 21:42:25.274153] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 [2024-11-27 21:42:25.888756] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.946 [2024-11-27 21:42:25.888826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.946 [2024-11-27 21:42:25.888837] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:02.946 [2024-11-27 21:42:25.888849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:02.946 [2024-11-27 21:42:25.888855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:10:02.946 [2024-11-27 21:42:25.888867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:02.946 [2024-11-27 21:42:25.888873] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:02.946 [2024-11-27 21:42:25.888882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.946 21:42:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.946 "name": "Existed_Raid", 00:10:02.946 "uuid": "02399cc7-83ce-4905-bf6b-9ecc2bd202b8", 00:10:02.946 "strip_size_kb": 0, 00:10:02.946 "state": "configuring", 00:10:02.946 "raid_level": "raid1", 00:10:02.946 "superblock": true, 00:10:02.946 "num_base_bdevs": 4, 00:10:02.946 "num_base_bdevs_discovered": 0, 00:10:02.946 "num_base_bdevs_operational": 4, 00:10:02.946 "base_bdevs_list": [ 00:10:02.946 { 00:10:02.946 "name": "BaseBdev1", 00:10:02.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.946 "is_configured": false, 00:10:02.946 "data_offset": 0, 00:10:02.946 "data_size": 0 00:10:02.946 }, 00:10:02.946 { 00:10:02.946 "name": "BaseBdev2", 00:10:02.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.946 "is_configured": false, 00:10:02.946 "data_offset": 0, 00:10:02.946 "data_size": 0 00:10:02.946 }, 00:10:02.946 { 00:10:02.946 "name": "BaseBdev3", 00:10:02.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.946 "is_configured": false, 00:10:02.946 "data_offset": 0, 00:10:02.946 "data_size": 0 00:10:02.946 }, 00:10:02.946 { 00:10:02.946 "name": "BaseBdev4", 00:10:02.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.946 "is_configured": false, 00:10:02.946 "data_offset": 0, 00:10:02.946 "data_size": 0 00:10:02.946 } 00:10:02.946 ] 00:10:02.946 }' 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.946 21:42:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.515 [2024-11-27 21:42:26.359873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.515 [2024-11-27 21:42:26.359914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.515 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.515 [2024-11-27 21:42:26.371885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:03.515 [2024-11-27 21:42:26.371929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:03.516 [2024-11-27 21:42:26.371939] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.516 [2024-11-27 21:42:26.371947] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.516 [2024-11-27 21:42:26.371954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.516 [2024-11-27 21:42:26.371962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.516 [2024-11-27 21:42:26.371968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:10:03.516 [2024-11-27 21:42:26.371977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.516 [2024-11-27 21:42:26.392700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.516 BaseBdev1 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.516 [ 00:10:03.516 { 00:10:03.516 "name": "BaseBdev1", 00:10:03.516 "aliases": [ 00:10:03.516 "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1" 00:10:03.516 ], 00:10:03.516 "product_name": "Malloc disk", 00:10:03.516 "block_size": 512, 00:10:03.516 "num_blocks": 65536, 00:10:03.516 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:03.516 "assigned_rate_limits": { 00:10:03.516 "rw_ios_per_sec": 0, 00:10:03.516 "rw_mbytes_per_sec": 0, 00:10:03.516 "r_mbytes_per_sec": 0, 00:10:03.516 "w_mbytes_per_sec": 0 00:10:03.516 }, 00:10:03.516 "claimed": true, 00:10:03.516 "claim_type": "exclusive_write", 00:10:03.516 "zoned": false, 00:10:03.516 "supported_io_types": { 00:10:03.516 "read": true, 00:10:03.516 "write": true, 00:10:03.516 "unmap": true, 00:10:03.516 "flush": true, 00:10:03.516 "reset": true, 00:10:03.516 "nvme_admin": false, 00:10:03.516 "nvme_io": false, 00:10:03.516 "nvme_io_md": false, 00:10:03.516 "write_zeroes": true, 00:10:03.516 "zcopy": true, 00:10:03.516 "get_zone_info": false, 00:10:03.516 "zone_management": false, 00:10:03.516 "zone_append": false, 00:10:03.516 "compare": false, 00:10:03.516 "compare_and_write": false, 00:10:03.516 "abort": true, 00:10:03.516 "seek_hole": false, 00:10:03.516 "seek_data": false, 00:10:03.516 "copy": true, 00:10:03.516 "nvme_iov_md": false 00:10:03.516 }, 00:10:03.516 "memory_domains": [ 00:10:03.516 { 00:10:03.516 "dma_device_id": "system", 00:10:03.516 "dma_device_type": 1 00:10:03.516 }, 00:10:03.516 { 00:10:03.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.516 "dma_device_type": 2 00:10:03.516 } 00:10:03.516 ], 00:10:03.516 "driver_specific": {} 
00:10:03.516 } 00:10:03.516 ] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.516 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.516 "name": "Existed_Raid", 00:10:03.516 "uuid": "c8a9d427-95b4-478c-9fc2-502eb644f06f", 00:10:03.516 "strip_size_kb": 0, 00:10:03.516 "state": "configuring", 00:10:03.516 "raid_level": "raid1", 00:10:03.516 "superblock": true, 00:10:03.516 "num_base_bdevs": 4, 00:10:03.516 "num_base_bdevs_discovered": 1, 00:10:03.516 "num_base_bdevs_operational": 4, 00:10:03.516 "base_bdevs_list": [ 00:10:03.516 { 00:10:03.516 "name": "BaseBdev1", 00:10:03.516 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:03.516 "is_configured": true, 00:10:03.516 "data_offset": 2048, 00:10:03.516 "data_size": 63488 00:10:03.516 }, 00:10:03.516 { 00:10:03.516 "name": "BaseBdev2", 00:10:03.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.516 "is_configured": false, 00:10:03.516 "data_offset": 0, 00:10:03.516 "data_size": 0 00:10:03.516 }, 00:10:03.516 { 00:10:03.516 "name": "BaseBdev3", 00:10:03.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.516 "is_configured": false, 00:10:03.516 "data_offset": 0, 00:10:03.516 "data_size": 0 00:10:03.516 }, 00:10:03.516 { 00:10:03.516 "name": "BaseBdev4", 00:10:03.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.516 "is_configured": false, 00:10:03.516 "data_offset": 0, 00:10:03.516 "data_size": 0 00:10:03.516 } 00:10:03.516 ] 00:10:03.516 }' 00:10:03.517 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.517 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:03.777 [2024-11-27 21:42:26.840029] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.777 [2024-11-27 21:42:26.840160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.777 [2024-11-27 21:42:26.852064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.777 [2024-11-27 21:42:26.854033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.777 [2024-11-27 21:42:26.854137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.777 [2024-11-27 21:42:26.854165] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:03.777 [2024-11-27 21:42:26.854187] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:03.777 [2024-11-27 21:42:26.854205] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:03.777 [2024-11-27 21:42:26.854224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:03.777 21:42:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.777 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.777 "name": 
"Existed_Raid", 00:10:03.777 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:03.777 "strip_size_kb": 0, 00:10:03.777 "state": "configuring", 00:10:03.777 "raid_level": "raid1", 00:10:03.777 "superblock": true, 00:10:03.777 "num_base_bdevs": 4, 00:10:03.777 "num_base_bdevs_discovered": 1, 00:10:03.777 "num_base_bdevs_operational": 4, 00:10:03.777 "base_bdevs_list": [ 00:10:03.777 { 00:10:03.777 "name": "BaseBdev1", 00:10:03.777 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:03.777 "is_configured": true, 00:10:03.777 "data_offset": 2048, 00:10:03.777 "data_size": 63488 00:10:03.777 }, 00:10:03.777 { 00:10:03.777 "name": "BaseBdev2", 00:10:03.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.777 "is_configured": false, 00:10:03.777 "data_offset": 0, 00:10:03.777 "data_size": 0 00:10:03.777 }, 00:10:03.777 { 00:10:03.777 "name": "BaseBdev3", 00:10:03.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.777 "is_configured": false, 00:10:03.777 "data_offset": 0, 00:10:03.777 "data_size": 0 00:10:03.777 }, 00:10:03.777 { 00:10:03.777 "name": "BaseBdev4", 00:10:03.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.777 "is_configured": false, 00:10:03.777 "data_offset": 0, 00:10:03.777 "data_size": 0 00:10:03.777 } 00:10:03.777 ] 00:10:03.777 }' 00:10:03.778 21:42:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.778 21:42:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.347 [2024-11-27 21:42:27.274215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.347 
BaseBdev2 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.347 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.348 [ 00:10:04.348 { 00:10:04.348 "name": "BaseBdev2", 00:10:04.348 "aliases": [ 00:10:04.348 "0836859d-15f9-4741-8297-1360f8eb81c6" 00:10:04.348 ], 00:10:04.348 "product_name": "Malloc disk", 00:10:04.348 "block_size": 512, 00:10:04.348 "num_blocks": 65536, 00:10:04.348 "uuid": "0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:04.348 "assigned_rate_limits": { 
00:10:04.348 "rw_ios_per_sec": 0, 00:10:04.348 "rw_mbytes_per_sec": 0, 00:10:04.348 "r_mbytes_per_sec": 0, 00:10:04.348 "w_mbytes_per_sec": 0 00:10:04.348 }, 00:10:04.348 "claimed": true, 00:10:04.348 "claim_type": "exclusive_write", 00:10:04.348 "zoned": false, 00:10:04.348 "supported_io_types": { 00:10:04.348 "read": true, 00:10:04.348 "write": true, 00:10:04.348 "unmap": true, 00:10:04.348 "flush": true, 00:10:04.348 "reset": true, 00:10:04.348 "nvme_admin": false, 00:10:04.348 "nvme_io": false, 00:10:04.348 "nvme_io_md": false, 00:10:04.348 "write_zeroes": true, 00:10:04.348 "zcopy": true, 00:10:04.348 "get_zone_info": false, 00:10:04.348 "zone_management": false, 00:10:04.348 "zone_append": false, 00:10:04.348 "compare": false, 00:10:04.348 "compare_and_write": false, 00:10:04.348 "abort": true, 00:10:04.348 "seek_hole": false, 00:10:04.348 "seek_data": false, 00:10:04.348 "copy": true, 00:10:04.348 "nvme_iov_md": false 00:10:04.348 }, 00:10:04.348 "memory_domains": [ 00:10:04.348 { 00:10:04.348 "dma_device_id": "system", 00:10:04.348 "dma_device_type": 1 00:10:04.348 }, 00:10:04.348 { 00:10:04.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.348 "dma_device_type": 2 00:10:04.348 } 00:10:04.348 ], 00:10:04.348 "driver_specific": {} 00:10:04.348 } 00:10:04.348 ] 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.348 "name": "Existed_Raid", 00:10:04.348 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:04.348 "strip_size_kb": 0, 00:10:04.348 "state": "configuring", 00:10:04.348 "raid_level": "raid1", 00:10:04.348 "superblock": true, 00:10:04.348 "num_base_bdevs": 4, 00:10:04.348 "num_base_bdevs_discovered": 2, 00:10:04.348 "num_base_bdevs_operational": 4, 00:10:04.348 
"base_bdevs_list": [ 00:10:04.348 { 00:10:04.348 "name": "BaseBdev1", 00:10:04.348 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:04.348 "is_configured": true, 00:10:04.348 "data_offset": 2048, 00:10:04.348 "data_size": 63488 00:10:04.348 }, 00:10:04.348 { 00:10:04.348 "name": "BaseBdev2", 00:10:04.348 "uuid": "0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:04.348 "is_configured": true, 00:10:04.348 "data_offset": 2048, 00:10:04.348 "data_size": 63488 00:10:04.348 }, 00:10:04.348 { 00:10:04.348 "name": "BaseBdev3", 00:10:04.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.348 "is_configured": false, 00:10:04.348 "data_offset": 0, 00:10:04.348 "data_size": 0 00:10:04.348 }, 00:10:04.348 { 00:10:04.348 "name": "BaseBdev4", 00:10:04.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.348 "is_configured": false, 00:10:04.348 "data_offset": 0, 00:10:04.348 "data_size": 0 00:10:04.348 } 00:10:04.348 ] 00:10:04.348 }' 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.348 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.918 [2024-11-27 21:42:27.760842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.918 BaseBdev3 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.918 [ 00:10:04.918 { 00:10:04.918 "name": "BaseBdev3", 00:10:04.918 "aliases": [ 00:10:04.918 "c626444a-eb5f-4b86-84a9-93aea4b23996" 00:10:04.918 ], 00:10:04.918 "product_name": "Malloc disk", 00:10:04.918 "block_size": 512, 00:10:04.918 "num_blocks": 65536, 00:10:04.918 "uuid": "c626444a-eb5f-4b86-84a9-93aea4b23996", 00:10:04.918 "assigned_rate_limits": { 00:10:04.918 "rw_ios_per_sec": 0, 00:10:04.918 "rw_mbytes_per_sec": 0, 00:10:04.918 "r_mbytes_per_sec": 0, 00:10:04.918 "w_mbytes_per_sec": 0 00:10:04.918 }, 00:10:04.918 "claimed": true, 00:10:04.918 "claim_type": "exclusive_write", 00:10:04.918 "zoned": false, 00:10:04.918 "supported_io_types": { 00:10:04.918 "read": true, 00:10:04.918 
"write": true, 00:10:04.918 "unmap": true, 00:10:04.918 "flush": true, 00:10:04.918 "reset": true, 00:10:04.918 "nvme_admin": false, 00:10:04.918 "nvme_io": false, 00:10:04.918 "nvme_io_md": false, 00:10:04.918 "write_zeroes": true, 00:10:04.918 "zcopy": true, 00:10:04.918 "get_zone_info": false, 00:10:04.918 "zone_management": false, 00:10:04.918 "zone_append": false, 00:10:04.918 "compare": false, 00:10:04.918 "compare_and_write": false, 00:10:04.918 "abort": true, 00:10:04.918 "seek_hole": false, 00:10:04.918 "seek_data": false, 00:10:04.918 "copy": true, 00:10:04.918 "nvme_iov_md": false 00:10:04.918 }, 00:10:04.918 "memory_domains": [ 00:10:04.918 { 00:10:04.918 "dma_device_id": "system", 00:10:04.918 "dma_device_type": 1 00:10:04.918 }, 00:10:04.918 { 00:10:04.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.918 "dma_device_type": 2 00:10:04.918 } 00:10:04.918 ], 00:10:04.918 "driver_specific": {} 00:10:04.918 } 00:10:04.918 ] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.918 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.918 "name": "Existed_Raid", 00:10:04.918 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:04.918 "strip_size_kb": 0, 00:10:04.918 "state": "configuring", 00:10:04.918 "raid_level": "raid1", 00:10:04.918 "superblock": true, 00:10:04.918 "num_base_bdevs": 4, 00:10:04.918 "num_base_bdevs_discovered": 3, 00:10:04.918 "num_base_bdevs_operational": 4, 00:10:04.918 "base_bdevs_list": [ 00:10:04.918 { 00:10:04.918 "name": "BaseBdev1", 00:10:04.918 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:04.918 "is_configured": true, 00:10:04.918 "data_offset": 2048, 00:10:04.918 "data_size": 63488 00:10:04.918 }, 00:10:04.918 { 00:10:04.918 "name": "BaseBdev2", 00:10:04.918 "uuid": 
"0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:04.919 "is_configured": true, 00:10:04.919 "data_offset": 2048, 00:10:04.919 "data_size": 63488 00:10:04.919 }, 00:10:04.919 { 00:10:04.919 "name": "BaseBdev3", 00:10:04.919 "uuid": "c626444a-eb5f-4b86-84a9-93aea4b23996", 00:10:04.919 "is_configured": true, 00:10:04.919 "data_offset": 2048, 00:10:04.919 "data_size": 63488 00:10:04.919 }, 00:10:04.919 { 00:10:04.919 "name": "BaseBdev4", 00:10:04.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.919 "is_configured": false, 00:10:04.919 "data_offset": 0, 00:10:04.919 "data_size": 0 00:10:04.919 } 00:10:04.919 ] 00:10:04.919 }' 00:10:04.919 21:42:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.919 21:42:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.178 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:05.178 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.178 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.178 [2024-11-27 21:42:28.250871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:05.178 [2024-11-27 21:42:28.251081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:05.179 [2024-11-27 21:42:28.251096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:05.179 BaseBdev4 00:10:05.179 [2024-11-27 21:42:28.251387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:05.179 [2024-11-27 21:42:28.251537] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:05.179 [2024-11-27 21:42:28.251556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000001900 00:10:05.179 [2024-11-27 21:42:28.251737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.179 [ 00:10:05.179 { 00:10:05.179 "name": "BaseBdev4", 00:10:05.179 "aliases": [ 00:10:05.179 "9d7de837-d36a-4487-91ef-1b71b031cea3" 00:10:05.179 ], 00:10:05.179 "product_name": "Malloc disk", 00:10:05.179 "block_size": 512, 00:10:05.179 
"num_blocks": 65536, 00:10:05.179 "uuid": "9d7de837-d36a-4487-91ef-1b71b031cea3", 00:10:05.179 "assigned_rate_limits": { 00:10:05.179 "rw_ios_per_sec": 0, 00:10:05.179 "rw_mbytes_per_sec": 0, 00:10:05.179 "r_mbytes_per_sec": 0, 00:10:05.179 "w_mbytes_per_sec": 0 00:10:05.179 }, 00:10:05.179 "claimed": true, 00:10:05.179 "claim_type": "exclusive_write", 00:10:05.179 "zoned": false, 00:10:05.179 "supported_io_types": { 00:10:05.179 "read": true, 00:10:05.179 "write": true, 00:10:05.179 "unmap": true, 00:10:05.179 "flush": true, 00:10:05.179 "reset": true, 00:10:05.179 "nvme_admin": false, 00:10:05.179 "nvme_io": false, 00:10:05.179 "nvme_io_md": false, 00:10:05.179 "write_zeroes": true, 00:10:05.179 "zcopy": true, 00:10:05.179 "get_zone_info": false, 00:10:05.179 "zone_management": false, 00:10:05.179 "zone_append": false, 00:10:05.179 "compare": false, 00:10:05.179 "compare_and_write": false, 00:10:05.179 "abort": true, 00:10:05.179 "seek_hole": false, 00:10:05.179 "seek_data": false, 00:10:05.179 "copy": true, 00:10:05.179 "nvme_iov_md": false 00:10:05.179 }, 00:10:05.179 "memory_domains": [ 00:10:05.179 { 00:10:05.179 "dma_device_id": "system", 00:10:05.179 "dma_device_type": 1 00:10:05.179 }, 00:10:05.179 { 00:10:05.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.179 "dma_device_type": 2 00:10:05.179 } 00:10:05.179 ], 00:10:05.179 "driver_specific": {} 00:10:05.179 } 00:10:05.179 ] 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.179 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.439 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.439 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.439 "name": "Existed_Raid", 00:10:05.439 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:05.439 "strip_size_kb": 0, 00:10:05.439 "state": "online", 00:10:05.439 "raid_level": "raid1", 00:10:05.439 "superblock": true, 00:10:05.439 "num_base_bdevs": 4, 
00:10:05.439 "num_base_bdevs_discovered": 4, 00:10:05.439 "num_base_bdevs_operational": 4, 00:10:05.439 "base_bdevs_list": [ 00:10:05.439 { 00:10:05.439 "name": "BaseBdev1", 00:10:05.439 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:05.439 "is_configured": true, 00:10:05.439 "data_offset": 2048, 00:10:05.439 "data_size": 63488 00:10:05.439 }, 00:10:05.439 { 00:10:05.439 "name": "BaseBdev2", 00:10:05.439 "uuid": "0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:05.439 "is_configured": true, 00:10:05.439 "data_offset": 2048, 00:10:05.439 "data_size": 63488 00:10:05.439 }, 00:10:05.439 { 00:10:05.439 "name": "BaseBdev3", 00:10:05.439 "uuid": "c626444a-eb5f-4b86-84a9-93aea4b23996", 00:10:05.439 "is_configured": true, 00:10:05.439 "data_offset": 2048, 00:10:05.439 "data_size": 63488 00:10:05.439 }, 00:10:05.439 { 00:10:05.439 "name": "BaseBdev4", 00:10:05.439 "uuid": "9d7de837-d36a-4487-91ef-1b71b031cea3", 00:10:05.439 "is_configured": true, 00:10:05.439 "data_offset": 2048, 00:10:05.439 "data_size": 63488 00:10:05.439 } 00:10:05.439 ] 00:10:05.439 }' 00:10:05.439 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.439 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:05.698 
21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.698 [2024-11-27 21:42:28.730430] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.698 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:05.698 "name": "Existed_Raid", 00:10:05.698 "aliases": [ 00:10:05.698 "700aecff-bab4-43ca-9096-fb24f81b2d2c" 00:10:05.698 ], 00:10:05.698 "product_name": "Raid Volume", 00:10:05.698 "block_size": 512, 00:10:05.698 "num_blocks": 63488, 00:10:05.698 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:05.698 "assigned_rate_limits": { 00:10:05.698 "rw_ios_per_sec": 0, 00:10:05.699 "rw_mbytes_per_sec": 0, 00:10:05.699 "r_mbytes_per_sec": 0, 00:10:05.699 "w_mbytes_per_sec": 0 00:10:05.699 }, 00:10:05.699 "claimed": false, 00:10:05.699 "zoned": false, 00:10:05.699 "supported_io_types": { 00:10:05.699 "read": true, 00:10:05.699 "write": true, 00:10:05.699 "unmap": false, 00:10:05.699 "flush": false, 00:10:05.699 "reset": true, 00:10:05.699 "nvme_admin": false, 00:10:05.699 "nvme_io": false, 00:10:05.699 "nvme_io_md": false, 00:10:05.699 "write_zeroes": true, 00:10:05.699 "zcopy": false, 00:10:05.699 "get_zone_info": false, 00:10:05.699 "zone_management": false, 00:10:05.699 "zone_append": false, 00:10:05.699 "compare": false, 00:10:05.699 "compare_and_write": false, 00:10:05.699 "abort": false, 00:10:05.699 "seek_hole": false, 00:10:05.699 "seek_data": false, 00:10:05.699 "copy": false, 00:10:05.699 
"nvme_iov_md": false 00:10:05.699 }, 00:10:05.699 "memory_domains": [ 00:10:05.699 { 00:10:05.699 "dma_device_id": "system", 00:10:05.699 "dma_device_type": 1 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.699 "dma_device_type": 2 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "system", 00:10:05.699 "dma_device_type": 1 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.699 "dma_device_type": 2 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "system", 00:10:05.699 "dma_device_type": 1 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.699 "dma_device_type": 2 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "system", 00:10:05.699 "dma_device_type": 1 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:05.699 "dma_device_type": 2 00:10:05.699 } 00:10:05.699 ], 00:10:05.699 "driver_specific": { 00:10:05.699 "raid": { 00:10:05.699 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:05.699 "strip_size_kb": 0, 00:10:05.699 "state": "online", 00:10:05.699 "raid_level": "raid1", 00:10:05.699 "superblock": true, 00:10:05.699 "num_base_bdevs": 4, 00:10:05.699 "num_base_bdevs_discovered": 4, 00:10:05.699 "num_base_bdevs_operational": 4, 00:10:05.699 "base_bdevs_list": [ 00:10:05.699 { 00:10:05.699 "name": "BaseBdev1", 00:10:05.699 "uuid": "f5ec83b0-2fa0-4076-a5f1-1a73c7640db1", 00:10:05.699 "is_configured": true, 00:10:05.699 "data_offset": 2048, 00:10:05.699 "data_size": 63488 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "name": "BaseBdev2", 00:10:05.699 "uuid": "0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:05.699 "is_configured": true, 00:10:05.699 "data_offset": 2048, 00:10:05.699 "data_size": 63488 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "name": "BaseBdev3", 00:10:05.699 "uuid": "c626444a-eb5f-4b86-84a9-93aea4b23996", 00:10:05.699 "is_configured": true, 
00:10:05.699 "data_offset": 2048, 00:10:05.699 "data_size": 63488 00:10:05.699 }, 00:10:05.699 { 00:10:05.699 "name": "BaseBdev4", 00:10:05.699 "uuid": "9d7de837-d36a-4487-91ef-1b71b031cea3", 00:10:05.699 "is_configured": true, 00:10:05.699 "data_offset": 2048, 00:10:05.699 "data_size": 63488 00:10:05.699 } 00:10:05.699 ] 00:10:05.699 } 00:10:05.699 } 00:10:05.699 }' 00:10:05.699 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:05.699 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:05.699 BaseBdev2 00:10:05.699 BaseBdev3 00:10:05.699 BaseBdev4' 00:10:05.699 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.959 21:42:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.959 [2024-11-27 21:42:29.049608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:05.959 21:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.959 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.217 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.217 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.217 "name": "Existed_Raid", 00:10:06.217 "uuid": "700aecff-bab4-43ca-9096-fb24f81b2d2c", 00:10:06.217 "strip_size_kb": 0, 00:10:06.217 
"state": "online", 00:10:06.217 "raid_level": "raid1", 00:10:06.217 "superblock": true, 00:10:06.217 "num_base_bdevs": 4, 00:10:06.217 "num_base_bdevs_discovered": 3, 00:10:06.217 "num_base_bdevs_operational": 3, 00:10:06.217 "base_bdevs_list": [ 00:10:06.217 { 00:10:06.217 "name": null, 00:10:06.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.217 "is_configured": false, 00:10:06.217 "data_offset": 0, 00:10:06.217 "data_size": 63488 00:10:06.217 }, 00:10:06.217 { 00:10:06.217 "name": "BaseBdev2", 00:10:06.217 "uuid": "0836859d-15f9-4741-8297-1360f8eb81c6", 00:10:06.217 "is_configured": true, 00:10:06.217 "data_offset": 2048, 00:10:06.217 "data_size": 63488 00:10:06.217 }, 00:10:06.217 { 00:10:06.217 "name": "BaseBdev3", 00:10:06.217 "uuid": "c626444a-eb5f-4b86-84a9-93aea4b23996", 00:10:06.217 "is_configured": true, 00:10:06.217 "data_offset": 2048, 00:10:06.217 "data_size": 63488 00:10:06.217 }, 00:10:06.217 { 00:10:06.217 "name": "BaseBdev4", 00:10:06.217 "uuid": "9d7de837-d36a-4487-91ef-1b71b031cea3", 00:10:06.217 "is_configured": true, 00:10:06.217 "data_offset": 2048, 00:10:06.217 "data_size": 63488 00:10:06.217 } 00:10:06.217 ] 00:10:06.217 }' 00:10:06.217 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.217 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.476 21:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.476 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 [2024-11-27 21:42:29.603831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 [2024-11-27 21:42:29.674969] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 [2024-11-27 21:42:29.746173] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:06.736 [2024-11-27 21:42:29.746330] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.736 [2024-11-27 21:42:29.757866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.736 [2024-11-27 21:42:29.757931] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.736 [2024-11-27 21:42:29.757943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.736 21:42:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:06.736 [ 00:10:06.736 { 00:10:06.736 "name": "BaseBdev2", 00:10:06.736 "aliases": [ 00:10:06.736 "3efddf3f-b2d9-463e-81bb-411e5522c54f" 00:10:06.736 ], 00:10:06.736 "product_name": "Malloc disk", 00:10:06.736 "block_size": 512, 00:10:06.736 "num_blocks": 65536, 00:10:06.736 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f", 00:10:06.736 "assigned_rate_limits": { 00:10:06.736 "rw_ios_per_sec": 0, 00:10:06.736 "rw_mbytes_per_sec": 0, 00:10:06.736 "r_mbytes_per_sec": 0, 00:10:06.736 "w_mbytes_per_sec": 0 00:10:06.736 }, 00:10:06.736 "claimed": false, 00:10:06.736 "zoned": false, 00:10:06.736 "supported_io_types": { 00:10:06.736 "read": true, 00:10:06.736 "write": true, 00:10:06.736 "unmap": true, 00:10:06.736 "flush": true, 00:10:06.736 "reset": true, 00:10:06.996 "nvme_admin": false, 00:10:06.996 "nvme_io": false, 00:10:06.996 "nvme_io_md": false, 00:10:06.996 "write_zeroes": true, 00:10:06.996 "zcopy": true, 00:10:06.996 "get_zone_info": false, 00:10:06.996 "zone_management": false, 00:10:06.996 "zone_append": false, 00:10:06.996 "compare": false, 00:10:06.997 "compare_and_write": false, 00:10:06.997 "abort": true, 00:10:06.997 "seek_hole": false, 00:10:06.997 "seek_data": false, 00:10:06.997 "copy": true, 00:10:06.997 "nvme_iov_md": false 00:10:06.997 }, 00:10:06.997 "memory_domains": [ 00:10:06.997 { 00:10:06.997 "dma_device_id": "system", 00:10:06.997 "dma_device_type": 1 00:10:06.997 }, 00:10:06.997 { 00:10:06.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.997 "dma_device_type": 2 00:10:06.997 } 00:10:06.997 ], 00:10:06.997 "driver_specific": {} 00:10:06.997 } 00:10:06.997 ] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.997 21:42:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.997 BaseBdev3 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.997 21:42:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.997 [ 00:10:06.997 { 00:10:06.997 "name": "BaseBdev3", 00:10:06.997 "aliases": [ 00:10:06.997 "879acac5-7f84-452d-9cac-97507ea7aaea" 00:10:06.997 ], 00:10:06.997 "product_name": "Malloc disk", 00:10:06.997 "block_size": 512, 00:10:06.997 "num_blocks": 65536, 00:10:06.997 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea", 00:10:06.997 "assigned_rate_limits": { 00:10:06.997 "rw_ios_per_sec": 0, 00:10:06.997 "rw_mbytes_per_sec": 0, 00:10:06.997 "r_mbytes_per_sec": 0, 00:10:06.997 "w_mbytes_per_sec": 0 00:10:06.997 }, 00:10:06.997 "claimed": false, 00:10:06.997 "zoned": false, 00:10:06.997 "supported_io_types": { 00:10:06.997 "read": true, 00:10:06.997 "write": true, 00:10:06.997 "unmap": true, 00:10:06.997 "flush": true, 00:10:06.997 "reset": true, 00:10:06.997 "nvme_admin": false, 00:10:06.997 "nvme_io": false, 00:10:06.997 "nvme_io_md": false, 00:10:06.997 "write_zeroes": true, 00:10:06.997 "zcopy": true, 00:10:06.997 "get_zone_info": false, 00:10:06.997 "zone_management": false, 00:10:06.997 "zone_append": false, 00:10:06.997 "compare": false, 00:10:06.997 "compare_and_write": false, 00:10:06.997 "abort": true, 00:10:06.997 "seek_hole": false, 00:10:06.997 "seek_data": false, 00:10:06.997 "copy": true, 00:10:06.997 "nvme_iov_md": false 00:10:06.997 }, 00:10:06.997 "memory_domains": [ 00:10:06.997 { 00:10:06.997 "dma_device_id": "system", 00:10:06.997 "dma_device_type": 1 00:10:06.997 }, 00:10:06.997 { 00:10:06.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.997 "dma_device_type": 2 00:10:06.997 } 00:10:06.997 ], 00:10:06.997 "driver_specific": {} 00:10:06.997 } 00:10:06.997 ] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.997 BaseBdev4 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.997 [
00:10:06.997 {
00:10:06.997 "name": "BaseBdev4",
00:10:06.997 "aliases": [
00:10:06.997 "36d7247a-2eeb-43ce-848e-477e4deda49b"
00:10:06.997 ],
00:10:06.997 "product_name": "Malloc disk",
00:10:06.997 "block_size": 512,
00:10:06.997 "num_blocks": 65536,
00:10:06.997 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:06.997 "assigned_rate_limits": {
00:10:06.997 "rw_ios_per_sec": 0,
00:10:06.997 "rw_mbytes_per_sec": 0,
00:10:06.997 "r_mbytes_per_sec": 0,
00:10:06.997 "w_mbytes_per_sec": 0
00:10:06.997 },
00:10:06.997 "claimed": false,
00:10:06.997 "zoned": false,
00:10:06.997 "supported_io_types": {
00:10:06.997 "read": true,
00:10:06.997 "write": true,
00:10:06.997 "unmap": true,
00:10:06.997 "flush": true,
00:10:06.997 "reset": true,
00:10:06.997 "nvme_admin": false,
00:10:06.997 "nvme_io": false,
00:10:06.997 "nvme_io_md": false,
00:10:06.997 "write_zeroes": true,
00:10:06.997 "zcopy": true,
00:10:06.997 "get_zone_info": false,
00:10:06.997 "zone_management": false,
00:10:06.997 "zone_append": false,
00:10:06.997 "compare": false,
00:10:06.997 "compare_and_write": false,
00:10:06.997 "abort": true,
00:10:06.997 "seek_hole": false,
00:10:06.997 "seek_data": false,
00:10:06.997 "copy": true,
00:10:06.997 "nvme_iov_md": false
00:10:06.997 },
00:10:06.997 "memory_domains": [
00:10:06.997 {
00:10:06.997 "dma_device_id": "system",
00:10:06.997 "dma_device_type": 1
00:10:06.997 },
00:10:06.997 {
00:10:06.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:06.997 "dma_device_type": 2
00:10:06.997 }
00:10:06.997 ],
00:10:06.997 "driver_specific": {}
00:10:06.997 }
00:10:06.997 ]
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.997 [2024-11-27 21:42:29.978467] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:06.997 [2024-11-27 21:42:29.978561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:06.997 [2024-11-27 21:42:29.978598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:06.997 [2024-11-27 21:42:29.980393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:06.997 [2024-11-27 21:42:29.980481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:06.997 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:06.998 21:42:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:06.998 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:06.998 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:06.998 "name": "Existed_Raid",
00:10:06.998 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:06.998 "strip_size_kb": 0,
00:10:06.998 "state": "configuring",
00:10:06.998 "raid_level": "raid1",
00:10:06.998 "superblock": true,
00:10:06.998 "num_base_bdevs": 4,
00:10:06.998 "num_base_bdevs_discovered": 3,
00:10:06.998 "num_base_bdevs_operational": 4,
00:10:06.998 "base_bdevs_list": [
00:10:06.998 {
00:10:06.998 "name": "BaseBdev1",
00:10:06.998 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:06.998 "is_configured": false,
00:10:06.998 "data_offset": 0,
00:10:06.998 "data_size": 0
00:10:06.998 },
00:10:06.998 {
00:10:06.998 "name": "BaseBdev2",
00:10:06.998 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:06.998 "is_configured": true,
00:10:06.998 "data_offset": 2048,
00:10:06.998 "data_size": 63488
00:10:06.998 },
00:10:06.998 {
00:10:06.998 "name": "BaseBdev3",
00:10:06.998 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:06.998 "is_configured": true,
00:10:06.998 "data_offset": 2048,
00:10:06.998 "data_size": 63488
00:10:06.998 },
00:10:06.998 {
00:10:06.998 "name": "BaseBdev4",
00:10:06.998 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:06.998 "is_configured": true,
00:10:06.998 "data_offset": 2048,
00:10:06.998 "data_size": 63488
00:10:06.998 }
00:10:06.998 ]
00:10:06.998 }'
00:10:06.998 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:06.998 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.566 [2024-11-27 21:42:30.421754] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.566 "name": "Existed_Raid",
00:10:07.566 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:07.566 "strip_size_kb": 0,
00:10:07.566 "state": "configuring",
00:10:07.566 "raid_level": "raid1",
00:10:07.566 "superblock": true,
00:10:07.566 "num_base_bdevs": 4,
00:10:07.566 "num_base_bdevs_discovered": 2,
00:10:07.566 "num_base_bdevs_operational": 4,
00:10:07.566 "base_bdevs_list": [
00:10:07.566 {
00:10:07.566 "name": "BaseBdev1",
00:10:07.566 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:07.566 "is_configured": false,
00:10:07.566 "data_offset": 0,
00:10:07.566 "data_size": 0
00:10:07.566 },
00:10:07.566 {
00:10:07.566 "name": null,
00:10:07.566 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:07.566 "is_configured": false,
00:10:07.566 "data_offset": 0,
00:10:07.566 "data_size": 63488
00:10:07.566 },
00:10:07.566 {
00:10:07.566 "name": "BaseBdev3",
00:10:07.566 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:07.566 "is_configured": true,
00:10:07.566 "data_offset": 2048,
00:10:07.566 "data_size": 63488
00:10:07.566 },
00:10:07.566 {
00:10:07.566 "name": "BaseBdev4",
00:10:07.566 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:07.566 "is_configured": true,
00:10:07.566 "data_offset": 2048,
00:10:07.566 "data_size": 63488
00:10:07.566 }
00:10:07.566 ]
00:10:07.566 }'
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.566 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.826 [2024-11-27 21:42:30.843820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:07.826 BaseBdev1
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.826 [
00:10:07.826 {
00:10:07.826 "name": "BaseBdev1",
00:10:07.826 "aliases": [
00:10:07.826 "863bd448-4050-4f71-966c-d9c802b46f7e"
00:10:07.826 ],
00:10:07.826 "product_name": "Malloc disk",
00:10:07.826 "block_size": 512,
00:10:07.826 "num_blocks": 65536,
00:10:07.826 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:07.826 "assigned_rate_limits": {
00:10:07.826 "rw_ios_per_sec": 0,
00:10:07.826 "rw_mbytes_per_sec": 0,
00:10:07.826 "r_mbytes_per_sec": 0,
00:10:07.826 "w_mbytes_per_sec": 0
00:10:07.826 },
00:10:07.826 "claimed": true,
00:10:07.826 "claim_type": "exclusive_write",
00:10:07.826 "zoned": false,
00:10:07.826 "supported_io_types": {
00:10:07.826 "read": true,
00:10:07.826 "write": true,
00:10:07.826 "unmap": true,
00:10:07.826 "flush": true,
00:10:07.826 "reset": true,
00:10:07.826 "nvme_admin": false,
00:10:07.826 "nvme_io": false,
00:10:07.826 "nvme_io_md": false,
00:10:07.826 "write_zeroes": true,
00:10:07.826 "zcopy": true,
00:10:07.826 "get_zone_info": false,
00:10:07.826 "zone_management": false,
00:10:07.826 "zone_append": false,
00:10:07.826 "compare": false,
00:10:07.826 "compare_and_write": false,
00:10:07.826 "abort": true,
00:10:07.826 "seek_hole": false,
00:10:07.826 "seek_data": false,
00:10:07.826 "copy": true,
00:10:07.826 "nvme_iov_md": false
00:10:07.826 },
00:10:07.826 "memory_domains": [
00:10:07.826 {
00:10:07.826 "dma_device_id": "system",
00:10:07.826 "dma_device_type": 1
00:10:07.826 },
00:10:07.826 {
00:10:07.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:07.826 "dma_device_type": 2
00:10:07.826 }
00:10:07.826 ],
00:10:07.826 "driver_specific": {}
00:10:07.826 }
00:10:07.826 ]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:07.826 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:07.827 "name": "Existed_Raid",
00:10:07.827 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:07.827 "strip_size_kb": 0,
00:10:07.827 "state": "configuring",
00:10:07.827 "raid_level": "raid1",
00:10:07.827 "superblock": true,
00:10:07.827 "num_base_bdevs": 4,
00:10:07.827 "num_base_bdevs_discovered": 3,
00:10:07.827 "num_base_bdevs_operational": 4,
00:10:07.827 "base_bdevs_list": [
00:10:07.827 {
00:10:07.827 "name": "BaseBdev1",
00:10:07.827 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:07.827 "is_configured": true,
00:10:07.827 "data_offset": 2048,
00:10:07.827 "data_size": 63488
00:10:07.827 },
00:10:07.827 {
00:10:07.827 "name": null,
00:10:07.827 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:07.827 "is_configured": false,
00:10:07.827 "data_offset": 0,
00:10:07.827 "data_size": 63488
00:10:07.827 },
00:10:07.827 {
00:10:07.827 "name": "BaseBdev3",
00:10:07.827 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:07.827 "is_configured": true,
00:10:07.827 "data_offset": 2048,
00:10:07.827 "data_size": 63488
00:10:07.827 },
00:10:07.827 {
00:10:07.827 "name": "BaseBdev4",
00:10:07.827 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:07.827 "is_configured": true,
00:10:07.827 "data_offset": 2048,
00:10:07.827 "data_size": 63488
00:10:07.827 }
00:10:07.827 ]
00:10:07.827 }'
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:07.827 21:42:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.398 [2024-11-27 21:42:31.402937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.398 "name": "Existed_Raid",
00:10:08.398 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:08.398 "strip_size_kb": 0,
00:10:08.398 "state": "configuring",
00:10:08.398 "raid_level": "raid1",
00:10:08.398 "superblock": true,
00:10:08.398 "num_base_bdevs": 4,
00:10:08.398 "num_base_bdevs_discovered": 2,
00:10:08.398 "num_base_bdevs_operational": 4,
00:10:08.398 "base_bdevs_list": [
00:10:08.398 {
00:10:08.398 "name": "BaseBdev1",
00:10:08.398 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:08.398 "is_configured": true,
00:10:08.398 "data_offset": 2048,
00:10:08.398 "data_size": 63488
00:10:08.398 },
00:10:08.398 {
00:10:08.398 "name": null,
00:10:08.398 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:08.398 "is_configured": false,
00:10:08.398 "data_offset": 0,
00:10:08.398 "data_size": 63488
00:10:08.398 },
00:10:08.398 {
00:10:08.398 "name": null,
00:10:08.398 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:08.398 "is_configured": false,
00:10:08.398 "data_offset": 0,
00:10:08.398 "data_size": 63488
00:10:08.398 },
00:10:08.398 {
00:10:08.398 "name": "BaseBdev4",
00:10:08.398 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:08.398 "is_configured": true,
00:10:08.398 "data_offset": 2048,
00:10:08.398 "data_size": 63488
00:10:08.398 }
00:10:08.398 ]
00:10:08.398 }'
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.398 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.969 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.969 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:08.969 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.969 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.970 [2024-11-27 21:42:31.858134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:08.970 "name": "Existed_Raid",
00:10:08.970 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:08.970 "strip_size_kb": 0,
00:10:08.970 "state": "configuring",
00:10:08.970 "raid_level": "raid1",
00:10:08.970 "superblock": true,
00:10:08.970 "num_base_bdevs": 4,
00:10:08.970 "num_base_bdevs_discovered": 3,
00:10:08.970 "num_base_bdevs_operational": 4,
00:10:08.970 "base_bdevs_list": [
00:10:08.970 {
00:10:08.970 "name": "BaseBdev1",
00:10:08.970 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:08.970 "is_configured": true,
00:10:08.970 "data_offset": 2048,
00:10:08.970 "data_size": 63488
00:10:08.970 },
00:10:08.970 {
00:10:08.970 "name": null,
00:10:08.970 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:08.970 "is_configured": false,
00:10:08.970 "data_offset": 0,
00:10:08.970 "data_size": 63488
00:10:08.970 },
00:10:08.970 {
00:10:08.970 "name": "BaseBdev3",
00:10:08.970 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:08.970 "is_configured": true,
00:10:08.970 "data_offset": 2048,
00:10:08.970 "data_size": 63488
00:10:08.970 },
00:10:08.970 {
00:10:08.970 "name": "BaseBdev4",
00:10:08.970 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:08.970 "is_configured": true,
00:10:08.970 "data_offset": 2048,
00:10:08.970 "data_size": 63488
00:10:08.970 }
00:10:08.970 ]
00:10:08.970 }'
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:08.970 21:42:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.229 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:10:09.229 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.229 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.229 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.229 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.489 [2024-11-27 21:42:32.361387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:09.489 "name": "Existed_Raid",
00:10:09.489 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:09.489 "strip_size_kb": 0,
00:10:09.489 "state": "configuring",
00:10:09.489 "raid_level": "raid1",
00:10:09.489 "superblock": true,
00:10:09.489 "num_base_bdevs": 4,
00:10:09.489 "num_base_bdevs_discovered": 2,
00:10:09.489 "num_base_bdevs_operational": 4,
00:10:09.489 "base_bdevs_list": [
00:10:09.489 {
00:10:09.489 "name": null,
00:10:09.489 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:09.489 "is_configured": false,
00:10:09.489 "data_offset": 0,
00:10:09.489 "data_size": 63488
00:10:09.489 },
00:10:09.489 {
00:10:09.489 "name": null,
00:10:09.489 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:09.489 "is_configured": false,
00:10:09.489 "data_offset": 0,
00:10:09.489 "data_size": 63488
00:10:09.489 },
00:10:09.489 {
00:10:09.489 "name": "BaseBdev3",
00:10:09.489 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:09.489 "is_configured": true,
00:10:09.489 "data_offset": 2048,
00:10:09.489 "data_size": 63488
00:10:09.489 },
00:10:09.489 {
00:10:09.489 "name": "BaseBdev4",
00:10:09.489 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:09.489 "is_configured": true,
00:10:09.489 "data_offset": 2048,
00:10:09.489 "data_size": 63488
00:10:09.489 }
00:10:09.489 ]
00:10:09.489 }'
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:09.489 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.748 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:09.748 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:09.748 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:09.748 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:10:09.748 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.008 [2024-11-27 21:42:32.887089] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:10.008 "name": "Existed_Raid",
00:10:10.008 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825",
00:10:10.008 "strip_size_kb": 0,
00:10:10.008 "state": "configuring",
00:10:10.008 "raid_level": "raid1",
00:10:10.008 "superblock": true,
00:10:10.008 "num_base_bdevs": 4,
00:10:10.008 "num_base_bdevs_discovered": 3,
00:10:10.008 "num_base_bdevs_operational": 4,
00:10:10.008 "base_bdevs_list": [
00:10:10.008 {
00:10:10.008 "name": null,
00:10:10.008 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e",
00:10:10.008 "is_configured": false,
00:10:10.008 "data_offset": 0,
00:10:10.008 "data_size": 63488
00:10:10.008 },
00:10:10.008 {
00:10:10.008 "name": "BaseBdev2",
00:10:10.008 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f",
00:10:10.008 "is_configured": true,
00:10:10.008 "data_offset": 2048,
00:10:10.008 "data_size": 63488
00:10:10.008 },
00:10:10.008 {
00:10:10.008 "name": "BaseBdev3",
00:10:10.008 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea",
00:10:10.008 "is_configured": true,
00:10:10.008 "data_offset": 2048,
00:10:10.008 "data_size": 63488
00:10:10.008 },
00:10:10.008 {
00:10:10.008 "name": "BaseBdev4",
00:10:10.008 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b",
00:10:10.008 "is_configured": true,
00:10:10.008 "data_offset": 2048,
00:10:10.008 "data_size": 63488
00:10:10.008 }
00:10:10.008 ]
00:10:10.008 }'
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:10.008 21:42:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 863bd448-4050-4f71-966c-d9c802b46f7e
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.267 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.527 NewBaseBdev
00:10:10.527 [2024-11-27 21:42:33.393132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
00:10:10.527 [2024-11-27 21:42:33.393315] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:10:10.527 [2024-11-27 21:42:33.393331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:10.527 [2024-11-27 21:42:33.393567] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:10:10.527 [2024-11-27 21:42:33.393700] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:10:10.527 [2024-11-27 21:42:33.393710] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:10:10.527 [2024-11-27 21:42:33.393864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.527 [ 00:10:10.527 { 00:10:10.527 "name": "NewBaseBdev", 00:10:10.527 "aliases": [ 00:10:10.527 "863bd448-4050-4f71-966c-d9c802b46f7e" 00:10:10.527 ], 00:10:10.527 "product_name": "Malloc disk", 00:10:10.527 "block_size": 512, 00:10:10.527 "num_blocks": 65536, 00:10:10.527 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e", 00:10:10.527 "assigned_rate_limits": { 00:10:10.527 "rw_ios_per_sec": 0, 00:10:10.527 "rw_mbytes_per_sec": 0, 00:10:10.527 "r_mbytes_per_sec": 0, 00:10:10.527 "w_mbytes_per_sec": 0 00:10:10.527 }, 00:10:10.527 "claimed": true, 00:10:10.527 "claim_type": "exclusive_write", 00:10:10.527 "zoned": false, 00:10:10.527 "supported_io_types": { 00:10:10.527 "read": true, 00:10:10.527 "write": true, 00:10:10.527 "unmap": true, 00:10:10.527 "flush": true, 00:10:10.527 "reset": true, 00:10:10.527 "nvme_admin": false, 00:10:10.527 "nvme_io": false, 00:10:10.527 "nvme_io_md": false, 00:10:10.527 "write_zeroes": true, 00:10:10.527 "zcopy": true, 00:10:10.527 "get_zone_info": false, 00:10:10.527 "zone_management": false, 00:10:10.527 "zone_append": false, 00:10:10.527 "compare": false, 00:10:10.527 "compare_and_write": false, 00:10:10.527 "abort": true, 00:10:10.527 "seek_hole": false, 00:10:10.527 "seek_data": false, 00:10:10.527 "copy": true, 00:10:10.527 "nvme_iov_md": false 00:10:10.527 }, 00:10:10.527 "memory_domains": [ 00:10:10.527 { 00:10:10.527 "dma_device_id": "system", 00:10:10.527 "dma_device_type": 1 00:10:10.527 }, 00:10:10.527 { 00:10:10.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.527 "dma_device_type": 2 00:10:10.527 } 00:10:10.527 ], 00:10:10.527 "driver_specific": {} 00:10:10.527 } 00:10:10.527 ] 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.527 21:42:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.527 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.528 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.528 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.528 "name": "Existed_Raid", 00:10:10.528 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825", 00:10:10.528 "strip_size_kb": 0, 00:10:10.528 
"state": "online", 00:10:10.528 "raid_level": "raid1", 00:10:10.528 "superblock": true, 00:10:10.528 "num_base_bdevs": 4, 00:10:10.528 "num_base_bdevs_discovered": 4, 00:10:10.528 "num_base_bdevs_operational": 4, 00:10:10.528 "base_bdevs_list": [ 00:10:10.528 { 00:10:10.528 "name": "NewBaseBdev", 00:10:10.528 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e", 00:10:10.528 "is_configured": true, 00:10:10.528 "data_offset": 2048, 00:10:10.528 "data_size": 63488 00:10:10.528 }, 00:10:10.528 { 00:10:10.528 "name": "BaseBdev2", 00:10:10.528 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f", 00:10:10.528 "is_configured": true, 00:10:10.528 "data_offset": 2048, 00:10:10.528 "data_size": 63488 00:10:10.528 }, 00:10:10.528 { 00:10:10.528 "name": "BaseBdev3", 00:10:10.528 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea", 00:10:10.528 "is_configured": true, 00:10:10.528 "data_offset": 2048, 00:10:10.528 "data_size": 63488 00:10:10.528 }, 00:10:10.528 { 00:10:10.528 "name": "BaseBdev4", 00:10:10.528 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b", 00:10:10.528 "is_configured": true, 00:10:10.528 "data_offset": 2048, 00:10:10.528 "data_size": 63488 00:10:10.528 } 00:10:10.528 ] 00:10:10.528 }' 00:10:10.528 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.528 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.788 
21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.788 [2024-11-27 21:42:33.864724] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.788 "name": "Existed_Raid", 00:10:10.788 "aliases": [ 00:10:10.788 "b5ee3269-ae23-41e6-b1c1-aaa9f724b825" 00:10:10.788 ], 00:10:10.788 "product_name": "Raid Volume", 00:10:10.788 "block_size": 512, 00:10:10.788 "num_blocks": 63488, 00:10:10.788 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825", 00:10:10.788 "assigned_rate_limits": { 00:10:10.788 "rw_ios_per_sec": 0, 00:10:10.788 "rw_mbytes_per_sec": 0, 00:10:10.788 "r_mbytes_per_sec": 0, 00:10:10.788 "w_mbytes_per_sec": 0 00:10:10.788 }, 00:10:10.788 "claimed": false, 00:10:10.788 "zoned": false, 00:10:10.788 "supported_io_types": { 00:10:10.788 "read": true, 00:10:10.788 "write": true, 00:10:10.788 "unmap": false, 00:10:10.788 "flush": false, 00:10:10.788 "reset": true, 00:10:10.788 "nvme_admin": false, 00:10:10.788 "nvme_io": false, 00:10:10.788 "nvme_io_md": false, 00:10:10.788 "write_zeroes": true, 00:10:10.788 "zcopy": false, 00:10:10.788 "get_zone_info": false, 00:10:10.788 "zone_management": false, 00:10:10.788 "zone_append": false, 00:10:10.788 "compare": false, 00:10:10.788 "compare_and_write": false, 00:10:10.788 
"abort": false, 00:10:10.788 "seek_hole": false, 00:10:10.788 "seek_data": false, 00:10:10.788 "copy": false, 00:10:10.788 "nvme_iov_md": false 00:10:10.788 }, 00:10:10.788 "memory_domains": [ 00:10:10.788 { 00:10:10.788 "dma_device_id": "system", 00:10:10.788 "dma_device_type": 1 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.788 "dma_device_type": 2 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "system", 00:10:10.788 "dma_device_type": 1 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.788 "dma_device_type": 2 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "system", 00:10:10.788 "dma_device_type": 1 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.788 "dma_device_type": 2 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "system", 00:10:10.788 "dma_device_type": 1 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.788 "dma_device_type": 2 00:10:10.788 } 00:10:10.788 ], 00:10:10.788 "driver_specific": { 00:10:10.788 "raid": { 00:10:10.788 "uuid": "b5ee3269-ae23-41e6-b1c1-aaa9f724b825", 00:10:10.788 "strip_size_kb": 0, 00:10:10.788 "state": "online", 00:10:10.788 "raid_level": "raid1", 00:10:10.788 "superblock": true, 00:10:10.788 "num_base_bdevs": 4, 00:10:10.788 "num_base_bdevs_discovered": 4, 00:10:10.788 "num_base_bdevs_operational": 4, 00:10:10.788 "base_bdevs_list": [ 00:10:10.788 { 00:10:10.788 "name": "NewBaseBdev", 00:10:10.788 "uuid": "863bd448-4050-4f71-966c-d9c802b46f7e", 00:10:10.788 "is_configured": true, 00:10:10.788 "data_offset": 2048, 00:10:10.788 "data_size": 63488 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "name": "BaseBdev2", 00:10:10.788 "uuid": "3efddf3f-b2d9-463e-81bb-411e5522c54f", 00:10:10.788 "is_configured": true, 00:10:10.788 "data_offset": 2048, 00:10:10.788 "data_size": 63488 00:10:10.788 }, 00:10:10.788 { 
00:10:10.788 "name": "BaseBdev3", 00:10:10.788 "uuid": "879acac5-7f84-452d-9cac-97507ea7aaea", 00:10:10.788 "is_configured": true, 00:10:10.788 "data_offset": 2048, 00:10:10.788 "data_size": 63488 00:10:10.788 }, 00:10:10.788 { 00:10:10.788 "name": "BaseBdev4", 00:10:10.788 "uuid": "36d7247a-2eeb-43ce-848e-477e4deda49b", 00:10:10.788 "is_configured": true, 00:10:10.788 "data_offset": 2048, 00:10:10.788 "data_size": 63488 00:10:10.788 } 00:10:10.788 ] 00:10:10.788 } 00:10:10.788 } 00:10:10.788 }' 00:10:10.788 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:11.048 BaseBdev2 00:10:11.048 BaseBdev3 00:10:11.048 BaseBdev4' 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.048 21:42:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.048 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.307 [2024-11-27 21:42:34.171884] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.307 [2024-11-27 21:42:34.171911] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:11.307 [2024-11-27 21:42:34.171981] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:11.307 [2024-11-27 21:42:34.172278] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:11.307 [2024-11-27 21:42:34.172293] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001c80 name Existed_Raid, state offline 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 84343 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84343 ']' 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 84343 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84343 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.307 killing process with pid 84343 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84343' 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 84343 00:10:11.307 [2024-11-27 21:42:34.221556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:11.307 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 84343 00:10:11.307 [2024-11-27 21:42:34.262435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:11.566 21:42:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:11.566 00:10:11.566 real 0m9.521s 00:10:11.566 user 0m16.322s 00:10:11.566 sys 0m2.006s 00:10:11.566 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:11.566 ************************************ 00:10:11.566 END TEST raid_state_function_test_sb 00:10:11.566 ************************************ 00:10:11.566 21:42:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.566 21:42:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:10:11.566 21:42:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.566 21:42:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.566 21:42:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.566 ************************************ 00:10:11.566 START TEST raid_superblock_test 00:10:11.566 ************************************ 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:11.566 21:42:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84990 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84990 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84990 ']' 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.566 21:42:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.566 [2024-11-27 21:42:34.625679] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:10:11.566 [2024-11-27 21:42:34.625813] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84990 ] 00:10:11.825 [2024-11-27 21:42:34.758256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.825 [2024-11-27 21:42:34.784196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.825 [2024-11-27 21:42:34.826851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.825 [2024-11-27 21:42:34.826896] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:12.394 
21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.394 malloc1 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.394 [2024-11-27 21:42:35.497745] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:12.394 [2024-11-27 21:42:35.497870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.394 [2024-11-27 21:42:35.497908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:12.394 [2024-11-27 21:42:35.497960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.394 [2024-11-27 21:42:35.500061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.394 [2024-11-27 21:42:35.500152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:12.394 pt1 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.394 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 malloc2 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 [2024-11-27 21:42:35.530279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.654 [2024-11-27 21:42:35.530387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.654 [2024-11-27 21:42:35.530411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:12.654 [2024-11-27 21:42:35.530422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.654 [2024-11-27 21:42:35.532601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.654 [2024-11-27 21:42:35.532637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.654 
pt2 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 malloc3 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 [2024-11-27 21:42:35.558639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:12.654 [2024-11-27 21:42:35.558730] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.654 [2024-11-27 21:42:35.558765] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:12.654 [2024-11-27 21:42:35.558820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.654 [2024-11-27 21:42:35.560958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.654 [2024-11-27 21:42:35.561027] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:12.654 pt3 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 malloc4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 [2024-11-27 21:42:35.609074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:12.654 [2024-11-27 21:42:35.609178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.654 [2024-11-27 21:42:35.609218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:12.654 [2024-11-27 21:42:35.609266] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.654 [2024-11-27 21:42:35.611820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.654 [2024-11-27 21:42:35.611897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:12.654 pt4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 [2024-11-27 21:42:35.621066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:12.654 [2024-11-27 21:42:35.622927] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.654 [2024-11-27 21:42:35.623006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:12.654 [2024-11-27 21:42:35.623072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:12.654 [2024-11-27 21:42:35.623227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:12.654 [2024-11-27 21:42:35.623239] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:12.654 [2024-11-27 21:42:35.623472] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:10:12.654 [2024-11-27 21:42:35.623610] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:12.654 [2024-11-27 21:42:35.623619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:12.654 [2024-11-27 21:42:35.623751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.654 
21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.654 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.654 "name": "raid_bdev1", 00:10:12.654 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:12.654 "strip_size_kb": 0, 00:10:12.654 "state": "online", 00:10:12.654 "raid_level": "raid1", 00:10:12.654 "superblock": true, 00:10:12.654 "num_base_bdevs": 4, 00:10:12.654 "num_base_bdevs_discovered": 4, 00:10:12.654 "num_base_bdevs_operational": 4, 00:10:12.654 "base_bdevs_list": [ 00:10:12.654 { 00:10:12.654 "name": "pt1", 00:10:12.654 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.654 "is_configured": true, 00:10:12.654 "data_offset": 2048, 00:10:12.654 "data_size": 63488 00:10:12.654 }, 00:10:12.654 { 00:10:12.654 "name": "pt2", 00:10:12.654 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.654 "is_configured": true, 00:10:12.654 "data_offset": 2048, 00:10:12.654 "data_size": 63488 00:10:12.654 }, 00:10:12.655 { 00:10:12.655 "name": "pt3", 00:10:12.655 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.655 "is_configured": true, 00:10:12.655 "data_offset": 2048, 00:10:12.655 "data_size": 63488 
00:10:12.655 }, 00:10:12.655 { 00:10:12.655 "name": "pt4", 00:10:12.655 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.655 "is_configured": true, 00:10:12.655 "data_offset": 2048, 00:10:12.655 "data_size": 63488 00:10:12.655 } 00:10:12.655 ] 00:10:12.655 }' 00:10:12.655 21:42:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.655 21:42:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.914 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.174 [2024-11-27 21:42:36.036723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.174 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.174 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:13.174 "name": "raid_bdev1", 00:10:13.174 "aliases": [ 00:10:13.174 "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b" 00:10:13.174 ], 
00:10:13.174 "product_name": "Raid Volume", 00:10:13.174 "block_size": 512, 00:10:13.174 "num_blocks": 63488, 00:10:13.174 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:13.174 "assigned_rate_limits": { 00:10:13.174 "rw_ios_per_sec": 0, 00:10:13.174 "rw_mbytes_per_sec": 0, 00:10:13.174 "r_mbytes_per_sec": 0, 00:10:13.174 "w_mbytes_per_sec": 0 00:10:13.174 }, 00:10:13.174 "claimed": false, 00:10:13.174 "zoned": false, 00:10:13.174 "supported_io_types": { 00:10:13.174 "read": true, 00:10:13.174 "write": true, 00:10:13.174 "unmap": false, 00:10:13.174 "flush": false, 00:10:13.174 "reset": true, 00:10:13.174 "nvme_admin": false, 00:10:13.174 "nvme_io": false, 00:10:13.174 "nvme_io_md": false, 00:10:13.174 "write_zeroes": true, 00:10:13.174 "zcopy": false, 00:10:13.174 "get_zone_info": false, 00:10:13.174 "zone_management": false, 00:10:13.174 "zone_append": false, 00:10:13.174 "compare": false, 00:10:13.174 "compare_and_write": false, 00:10:13.174 "abort": false, 00:10:13.174 "seek_hole": false, 00:10:13.174 "seek_data": false, 00:10:13.174 "copy": false, 00:10:13.174 "nvme_iov_md": false 00:10:13.174 }, 00:10:13.174 "memory_domains": [ 00:10:13.174 { 00:10:13.174 "dma_device_id": "system", 00:10:13.174 "dma_device_type": 1 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.174 "dma_device_type": 2 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "system", 00:10:13.174 "dma_device_type": 1 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.174 "dma_device_type": 2 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "system", 00:10:13.174 "dma_device_type": 1 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.174 "dma_device_type": 2 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": "system", 00:10:13.174 "dma_device_type": 1 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:13.174 "dma_device_type": 2 00:10:13.174 } 00:10:13.174 ], 00:10:13.174 "driver_specific": { 00:10:13.174 "raid": { 00:10:13.174 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:13.174 "strip_size_kb": 0, 00:10:13.174 "state": "online", 00:10:13.174 "raid_level": "raid1", 00:10:13.174 "superblock": true, 00:10:13.174 "num_base_bdevs": 4, 00:10:13.174 "num_base_bdevs_discovered": 4, 00:10:13.174 "num_base_bdevs_operational": 4, 00:10:13.174 "base_bdevs_list": [ 00:10:13.174 { 00:10:13.174 "name": "pt1", 00:10:13.174 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.174 "is_configured": true, 00:10:13.174 "data_offset": 2048, 00:10:13.174 "data_size": 63488 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "name": "pt2", 00:10:13.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.174 "is_configured": true, 00:10:13.174 "data_offset": 2048, 00:10:13.174 "data_size": 63488 00:10:13.174 }, 00:10:13.174 { 00:10:13.174 "name": "pt3", 00:10:13.174 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.174 "is_configured": true, 00:10:13.174 "data_offset": 2048, 00:10:13.174 "data_size": 63488 00:10:13.174 }, 00:10:13.174 { 00:10:13.175 "name": "pt4", 00:10:13.175 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.175 "is_configured": true, 00:10:13.175 "data_offset": 2048, 00:10:13.175 "data_size": 63488 00:10:13.175 } 00:10:13.175 ] 00:10:13.175 } 00:10:13.175 } 00:10:13.175 }' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:13.175 pt2 00:10:13.175 pt3 00:10:13.175 pt4' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.175 21:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.175 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 [2024-11-27 21:42:36.380098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a6d14ed0-e502-4f6d-a995-eaa53e0fe13b 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a6d14ed0-e502-4f6d-a995-eaa53e0fe13b ']' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 [2024-11-27 21:42:36.423699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.435 [2024-11-27 21:42:36.423727] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.435 [2024-11-27 21:42:36.423821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.435 [2024-11-27 21:42:36.423929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.435 [2024-11-27 21:42:36.423960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.435 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.436 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.436 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:13.436 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:13.436 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.436 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.696 21:42:36 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.696 [2024-11-27 21:42:36.587486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:13.696 [2024-11-27 21:42:36.589480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:13.696 [2024-11-27 21:42:36.589570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:13.696 [2024-11-27 21:42:36.589619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:13.696 [2024-11-27 21:42:36.589708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:13.696 [2024-11-27 21:42:36.589843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:13.696 [2024-11-27 21:42:36.589911] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:13.696 [2024-11-27 21:42:36.589983] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:13.696 [2024-11-27 21:42:36.590046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.696 [2024-11-27 21:42:36.590094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name 
raid_bdev1, state configuring 00:10:13.696 request: 00:10:13.696 { 00:10:13.696 "name": "raid_bdev1", 00:10:13.696 "raid_level": "raid1", 00:10:13.696 "base_bdevs": [ 00:10:13.696 "malloc1", 00:10:13.696 "malloc2", 00:10:13.696 "malloc3", 00:10:13.696 "malloc4" 00:10:13.696 ], 00:10:13.696 "superblock": false, 00:10:13.696 "method": "bdev_raid_create", 00:10:13.696 "req_id": 1 00:10:13.696 } 00:10:13.696 Got JSON-RPC error response 00:10:13.696 response: 00:10:13.696 { 00:10:13.696 "code": -17, 00:10:13.696 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:13.696 } 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.696 
21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.696 [2024-11-27 21:42:36.651346] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.696 [2024-11-27 21:42:36.651473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.696 [2024-11-27 21:42:36.651512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:13.696 [2024-11-27 21:42:36.651522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.696 [2024-11-27 21:42:36.653780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.696 [2024-11-27 21:42:36.653831] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.696 [2024-11-27 21:42:36.653919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.696 [2024-11-27 21:42:36.653964] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.696 pt1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.696 21:42:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.696 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.696 "name": "raid_bdev1", 00:10:13.696 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:13.696 "strip_size_kb": 0, 00:10:13.696 "state": "configuring", 00:10:13.696 "raid_level": "raid1", 00:10:13.696 "superblock": true, 00:10:13.696 "num_base_bdevs": 4, 00:10:13.696 "num_base_bdevs_discovered": 1, 00:10:13.696 "num_base_bdevs_operational": 4, 00:10:13.696 "base_bdevs_list": [ 00:10:13.696 { 00:10:13.696 "name": "pt1", 00:10:13.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:13.697 "is_configured": true, 00:10:13.697 "data_offset": 2048, 00:10:13.697 "data_size": 63488 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": null, 00:10:13.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.697 "is_configured": false, 00:10:13.697 "data_offset": 2048, 00:10:13.697 "data_size": 63488 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": null, 00:10:13.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.697 
"is_configured": false, 00:10:13.697 "data_offset": 2048, 00:10:13.697 "data_size": 63488 00:10:13.697 }, 00:10:13.697 { 00:10:13.697 "name": null, 00:10:13.697 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:13.697 "is_configured": false, 00:10:13.697 "data_offset": 2048, 00:10:13.697 "data_size": 63488 00:10:13.697 } 00:10:13.697 ] 00:10:13.697 }' 00:10:13.697 21:42:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.697 21:42:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.264 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:14.264 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.264 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.264 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.264 [2024-11-27 21:42:37.114565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.264 [2024-11-27 21:42:37.114634] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.264 [2024-11-27 21:42:37.114657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:14.265 [2024-11-27 21:42:37.114666] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.265 [2024-11-27 21:42:37.115091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.265 [2024-11-27 21:42:37.115108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.265 [2024-11-27 21:42:37.115185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.265 [2024-11-27 21:42:37.115206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:10:14.265 pt2 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.265 [2024-11-27 21:42:37.126568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.265 21:42:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.265 "name": "raid_bdev1", 00:10:14.265 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:14.265 "strip_size_kb": 0, 00:10:14.265 "state": "configuring", 00:10:14.265 "raid_level": "raid1", 00:10:14.265 "superblock": true, 00:10:14.265 "num_base_bdevs": 4, 00:10:14.265 "num_base_bdevs_discovered": 1, 00:10:14.265 "num_base_bdevs_operational": 4, 00:10:14.265 "base_bdevs_list": [ 00:10:14.265 { 00:10:14.265 "name": "pt1", 00:10:14.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.265 "is_configured": true, 00:10:14.265 "data_offset": 2048, 00:10:14.265 "data_size": 63488 00:10:14.265 }, 00:10:14.265 { 00:10:14.265 "name": null, 00:10:14.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.265 "is_configured": false, 00:10:14.265 "data_offset": 0, 00:10:14.265 "data_size": 63488 00:10:14.265 }, 00:10:14.265 { 00:10:14.265 "name": null, 00:10:14.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.265 "is_configured": false, 00:10:14.265 "data_offset": 2048, 00:10:14.265 "data_size": 63488 00:10:14.265 }, 00:10:14.265 { 00:10:14.265 "name": null, 00:10:14.265 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.265 "is_configured": false, 00:10:14.265 "data_offset": 2048, 00:10:14.265 "data_size": 63488 00:10:14.265 } 00:10:14.265 ] 00:10:14.265 }' 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.265 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.525 [2024-11-27 21:42:37.601753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:14.525 [2024-11-27 21:42:37.601902] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.525 [2024-11-27 21:42:37.601939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:14.525 [2024-11-27 21:42:37.601968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.525 [2024-11-27 21:42:37.602467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.525 [2024-11-27 21:42:37.602528] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:14.525 [2024-11-27 21:42:37.602648] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:14.525 [2024-11-27 21:42:37.602702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:14.525 pt2 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.525 21:42:37 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.525 [2024-11-27 21:42:37.613693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.525 [2024-11-27 21:42:37.613749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.525 [2024-11-27 21:42:37.613766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:14.525 [2024-11-27 21:42:37.613776] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.525 [2024-11-27 21:42:37.614210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.525 [2024-11-27 21:42:37.614243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.525 [2024-11-27 21:42:37.614328] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:14.525 [2024-11-27 21:42:37.614364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.525 pt3 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.525 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.525 [2024-11-27 21:42:37.625663] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:14.525 [2024-11-27 
21:42:37.625763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.525 [2024-11-27 21:42:37.625795] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:14.525 [2024-11-27 21:42:37.625837] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.525 [2024-11-27 21:42:37.626198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.525 [2024-11-27 21:42:37.626256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:14.525 [2024-11-27 21:42:37.626362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:14.526 [2024-11-27 21:42:37.626413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:14.526 [2024-11-27 21:42:37.626581] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:10:14.526 [2024-11-27 21:42:37.626625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.526 [2024-11-27 21:42:37.626903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:14.526 [2024-11-27 21:42:37.627087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:10:14.526 [2024-11-27 21:42:37.627129] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:10:14.526 [2024-11-27 21:42:37.627307] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.526 pt4 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.526 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.786 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.786 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.786 "name": "raid_bdev1", 00:10:14.786 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:14.786 "strip_size_kb": 0, 00:10:14.786 "state": "online", 00:10:14.786 "raid_level": "raid1", 00:10:14.786 "superblock": true, 00:10:14.786 "num_base_bdevs": 4, 00:10:14.786 
"num_base_bdevs_discovered": 4, 00:10:14.786 "num_base_bdevs_operational": 4, 00:10:14.786 "base_bdevs_list": [ 00:10:14.786 { 00:10:14.786 "name": "pt1", 00:10:14.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "name": "pt2", 00:10:14.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "name": "pt3", 00:10:14.786 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 }, 00:10:14.786 { 00:10:14.786 "name": "pt4", 00:10:14.786 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:14.786 "is_configured": true, 00:10:14.786 "data_offset": 2048, 00:10:14.786 "data_size": 63488 00:10:14.786 } 00:10:14.786 ] 00:10:14.786 }' 00:10:14.786 21:42:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.786 21:42:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.046 [2024-11-27 21:42:38.117196] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.046 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.046 "name": "raid_bdev1", 00:10:15.046 "aliases": [ 00:10:15.046 "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b" 00:10:15.046 ], 00:10:15.046 "product_name": "Raid Volume", 00:10:15.046 "block_size": 512, 00:10:15.046 "num_blocks": 63488, 00:10:15.046 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:15.046 "assigned_rate_limits": { 00:10:15.046 "rw_ios_per_sec": 0, 00:10:15.046 "rw_mbytes_per_sec": 0, 00:10:15.046 "r_mbytes_per_sec": 0, 00:10:15.046 "w_mbytes_per_sec": 0 00:10:15.046 }, 00:10:15.046 "claimed": false, 00:10:15.046 "zoned": false, 00:10:15.046 "supported_io_types": { 00:10:15.046 "read": true, 00:10:15.046 "write": true, 00:10:15.046 "unmap": false, 00:10:15.046 "flush": false, 00:10:15.046 "reset": true, 00:10:15.046 "nvme_admin": false, 00:10:15.046 "nvme_io": false, 00:10:15.046 "nvme_io_md": false, 00:10:15.046 "write_zeroes": true, 00:10:15.046 "zcopy": false, 00:10:15.046 "get_zone_info": false, 00:10:15.046 "zone_management": false, 00:10:15.046 "zone_append": false, 00:10:15.046 "compare": false, 00:10:15.046 "compare_and_write": false, 00:10:15.046 "abort": false, 00:10:15.046 "seek_hole": false, 00:10:15.046 "seek_data": false, 00:10:15.046 "copy": false, 00:10:15.046 "nvme_iov_md": false 00:10:15.046 }, 00:10:15.046 "memory_domains": [ 00:10:15.046 { 00:10:15.046 "dma_device_id": "system", 00:10:15.046 
"dma_device_type": 1 00:10:15.046 }, 00:10:15.046 { 00:10:15.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.046 "dma_device_type": 2 00:10:15.046 }, 00:10:15.046 { 00:10:15.046 "dma_device_id": "system", 00:10:15.046 "dma_device_type": 1 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.047 "dma_device_type": 2 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "dma_device_id": "system", 00:10:15.047 "dma_device_type": 1 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.047 "dma_device_type": 2 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "dma_device_id": "system", 00:10:15.047 "dma_device_type": 1 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.047 "dma_device_type": 2 00:10:15.047 } 00:10:15.047 ], 00:10:15.047 "driver_specific": { 00:10:15.047 "raid": { 00:10:15.047 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:15.047 "strip_size_kb": 0, 00:10:15.047 "state": "online", 00:10:15.047 "raid_level": "raid1", 00:10:15.047 "superblock": true, 00:10:15.047 "num_base_bdevs": 4, 00:10:15.047 "num_base_bdevs_discovered": 4, 00:10:15.047 "num_base_bdevs_operational": 4, 00:10:15.047 "base_bdevs_list": [ 00:10:15.047 { 00:10:15.047 "name": "pt1", 00:10:15.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:15.047 "is_configured": true, 00:10:15.047 "data_offset": 2048, 00:10:15.047 "data_size": 63488 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "name": "pt2", 00:10:15.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.047 "is_configured": true, 00:10:15.047 "data_offset": 2048, 00:10:15.047 "data_size": 63488 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "name": "pt3", 00:10:15.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.047 "is_configured": true, 00:10:15.047 "data_offset": 2048, 00:10:15.047 "data_size": 63488 00:10:15.047 }, 00:10:15.047 { 00:10:15.047 "name": "pt4", 00:10:15.047 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:10:15.047 "is_configured": true, 00:10:15.047 "data_offset": 2048, 00:10:15.047 "data_size": 63488 00:10:15.047 } 00:10:15.047 ] 00:10:15.047 } 00:10:15.047 } 00:10:15.047 }' 00:10:15.047 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:15.307 pt2 00:10:15.307 pt3 00:10:15.307 pt4' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.307 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.567 [2024-11-27 21:42:38.456619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a6d14ed0-e502-4f6d-a995-eaa53e0fe13b '!=' a6d14ed0-e502-4f6d-a995-eaa53e0fe13b ']' 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.567 [2024-11-27 21:42:38.488260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:15.567 21:42:38 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.567 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.567 "name": "raid_bdev1", 00:10:15.567 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:15.567 "strip_size_kb": 0, 00:10:15.567 "state": "online", 
00:10:15.567 "raid_level": "raid1", 00:10:15.567 "superblock": true, 00:10:15.567 "num_base_bdevs": 4, 00:10:15.567 "num_base_bdevs_discovered": 3, 00:10:15.567 "num_base_bdevs_operational": 3, 00:10:15.567 "base_bdevs_list": [ 00:10:15.567 { 00:10:15.567 "name": null, 00:10:15.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.567 "is_configured": false, 00:10:15.567 "data_offset": 0, 00:10:15.567 "data_size": 63488 00:10:15.567 }, 00:10:15.567 { 00:10:15.567 "name": "pt2", 00:10:15.567 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:15.567 "is_configured": true, 00:10:15.567 "data_offset": 2048, 00:10:15.567 "data_size": 63488 00:10:15.567 }, 00:10:15.567 { 00:10:15.567 "name": "pt3", 00:10:15.567 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:15.567 "is_configured": true, 00:10:15.567 "data_offset": 2048, 00:10:15.567 "data_size": 63488 00:10:15.567 }, 00:10:15.567 { 00:10:15.567 "name": "pt4", 00:10:15.567 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:15.567 "is_configured": true, 00:10:15.567 "data_offset": 2048, 00:10:15.567 "data_size": 63488 00:10:15.567 } 00:10:15.567 ] 00:10:15.567 }' 00:10:15.568 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.568 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.828 [2024-11-27 21:42:38.887637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:15.828 [2024-11-27 21:42:38.887705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.828 [2024-11-27 21:42:38.887840] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:10:15.828 [2024-11-27 21:42:38.887948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.828 [2024-11-27 21:42:38.888015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.828 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.087 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.088 
21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 [2024-11-27 21:42:38.987432] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.088 [2024-11-27 21:42:38.987485] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.088 [2024-11-27 21:42:38.987517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:16.088 [2024-11-27 21:42:38.987528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.088 [2024-11-27 21:42:38.989713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.088 [2024-11-27 21:42:38.989752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.088 [2024-11-27 21:42:38.989831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:16.088 [2024-11-27 21:42:38.989868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.088 pt2 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.088 21:42:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.088 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.088 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.088 "name": "raid_bdev1", 00:10:16.088 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:16.088 "strip_size_kb": 0, 00:10:16.088 "state": "configuring", 00:10:16.088 "raid_level": "raid1", 00:10:16.088 "superblock": true, 00:10:16.088 "num_base_bdevs": 4, 00:10:16.088 "num_base_bdevs_discovered": 1, 00:10:16.088 "num_base_bdevs_operational": 3, 00:10:16.088 "base_bdevs_list": [ 00:10:16.088 { 00:10:16.088 "name": null, 00:10:16.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.088 "is_configured": false, 00:10:16.088 "data_offset": 2048, 00:10:16.088 "data_size": 63488 00:10:16.088 }, 00:10:16.088 { 00:10:16.088 "name": "pt2", 00:10:16.088 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.088 "is_configured": true, 00:10:16.088 "data_offset": 2048, 00:10:16.088 "data_size": 63488 00:10:16.088 }, 00:10:16.088 { 00:10:16.088 "name": null, 00:10:16.088 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.088 "is_configured": false, 00:10:16.088 "data_offset": 2048, 00:10:16.088 "data_size": 63488 00:10:16.088 }, 00:10:16.088 { 00:10:16.088 "name": null, 00:10:16.088 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.088 "is_configured": false, 00:10:16.088 "data_offset": 2048, 00:10:16.088 "data_size": 63488 00:10:16.088 } 00:10:16.088 ] 00:10:16.088 }' 
00:10:16.088 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.088 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.346 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:16.346 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.346 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.347 [2024-11-27 21:42:39.406771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.347 [2024-11-27 21:42:39.406919] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.347 [2024-11-27 21:42:39.406962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:16.347 [2024-11-27 21:42:39.407008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.347 [2024-11-27 21:42:39.407449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.347 [2024-11-27 21:42:39.407506] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.347 [2024-11-27 21:42:39.407629] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:16.347 [2024-11-27 21:42:39.407693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.347 pt3 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.347 "name": "raid_bdev1", 00:10:16.347 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:16.347 "strip_size_kb": 0, 00:10:16.347 "state": "configuring", 00:10:16.347 "raid_level": "raid1", 00:10:16.347 "superblock": true, 00:10:16.347 "num_base_bdevs": 4, 00:10:16.347 "num_base_bdevs_discovered": 2, 00:10:16.347 "num_base_bdevs_operational": 3, 00:10:16.347 
"base_bdevs_list": [ 00:10:16.347 { 00:10:16.347 "name": null, 00:10:16.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.347 "is_configured": false, 00:10:16.347 "data_offset": 2048, 00:10:16.347 "data_size": 63488 00:10:16.347 }, 00:10:16.347 { 00:10:16.347 "name": "pt2", 00:10:16.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.347 "is_configured": true, 00:10:16.347 "data_offset": 2048, 00:10:16.347 "data_size": 63488 00:10:16.347 }, 00:10:16.347 { 00:10:16.347 "name": "pt3", 00:10:16.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.347 "is_configured": true, 00:10:16.347 "data_offset": 2048, 00:10:16.347 "data_size": 63488 00:10:16.347 }, 00:10:16.347 { 00:10:16.347 "name": null, 00:10:16.347 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.347 "is_configured": false, 00:10:16.347 "data_offset": 2048, 00:10:16.347 "data_size": 63488 00:10:16.347 } 00:10:16.347 ] 00:10:16.347 }' 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.347 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.914 [2024-11-27 21:42:39.853957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:16.914 [2024-11-27 21:42:39.854024] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.914 [2024-11-27 21:42:39.854045] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:16.914 [2024-11-27 21:42:39.854055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.914 [2024-11-27 21:42:39.854463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.914 [2024-11-27 21:42:39.854483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:16.914 [2024-11-27 21:42:39.854557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:16.914 [2024-11-27 21:42:39.854580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:16.914 [2024-11-27 21:42:39.854676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:10:16.914 [2024-11-27 21:42:39.854686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.914 [2024-11-27 21:42:39.855004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:10:16.914 [2024-11-27 21:42:39.855143] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:10:16.914 [2024-11-27 21:42:39.855152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:10:16.914 [2024-11-27 21:42:39.855303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.914 pt4 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.914 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.915 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.915 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.915 "name": "raid_bdev1", 00:10:16.915 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:16.915 "strip_size_kb": 0, 00:10:16.915 "state": "online", 00:10:16.915 "raid_level": "raid1", 00:10:16.915 "superblock": true, 00:10:16.915 "num_base_bdevs": 4, 00:10:16.915 "num_base_bdevs_discovered": 3, 00:10:16.915 "num_base_bdevs_operational": 3, 00:10:16.915 "base_bdevs_list": [ 00:10:16.915 { 00:10:16.915 "name": null, 00:10:16.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.915 "is_configured": false, 00:10:16.915 
"data_offset": 2048, 00:10:16.915 "data_size": 63488 00:10:16.915 }, 00:10:16.915 { 00:10:16.915 "name": "pt2", 00:10:16.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.915 "is_configured": true, 00:10:16.915 "data_offset": 2048, 00:10:16.915 "data_size": 63488 00:10:16.915 }, 00:10:16.915 { 00:10:16.915 "name": "pt3", 00:10:16.915 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.915 "is_configured": true, 00:10:16.915 "data_offset": 2048, 00:10:16.915 "data_size": 63488 00:10:16.915 }, 00:10:16.915 { 00:10:16.915 "name": "pt4", 00:10:16.915 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:16.915 "is_configured": true, 00:10:16.915 "data_offset": 2048, 00:10:16.915 "data_size": 63488 00:10:16.915 } 00:10:16.915 ] 00:10:16.915 }' 00:10:16.915 21:42:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.915 21:42:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.175 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.175 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.176 [2024-11-27 21:42:40.253281] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.176 [2024-11-27 21:42:40.253365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.176 [2024-11-27 21:42:40.253488] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.176 [2024-11-27 21:42:40.253632] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.176 [2024-11-27 21:42:40.253680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:10:17.176 21:42:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.176 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.470 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.471 [2024-11-27 21:42:40.325150] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:17.471 [2024-11-27 21:42:40.325239] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:10:17.471 [2024-11-27 21:42:40.325274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:17.471 [2024-11-27 21:42:40.325302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.471 [2024-11-27 21:42:40.327598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.471 [2024-11-27 21:42:40.327668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:17.471 [2024-11-27 21:42:40.327787] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:17.471 [2024-11-27 21:42:40.327884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:17.471 [2024-11-27 21:42:40.328053] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:17.471 [2024-11-27 21:42:40.328134] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.471 [2024-11-27 21:42:40.328201] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:10:17.471 [2024-11-27 21:42:40.328294] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:17.471 [2024-11-27 21:42:40.328447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:17.471 pt1 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.471 "name": "raid_bdev1", 00:10:17.471 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:17.471 "strip_size_kb": 0, 00:10:17.471 "state": "configuring", 00:10:17.471 "raid_level": "raid1", 00:10:17.471 "superblock": true, 00:10:17.471 "num_base_bdevs": 4, 00:10:17.471 "num_base_bdevs_discovered": 2, 00:10:17.471 "num_base_bdevs_operational": 3, 00:10:17.471 "base_bdevs_list": [ 00:10:17.471 { 00:10:17.471 "name": null, 00:10:17.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:17.471 "is_configured": false, 00:10:17.471 "data_offset": 2048, 00:10:17.471 
"data_size": 63488 00:10:17.471 }, 00:10:17.471 { 00:10:17.471 "name": "pt2", 00:10:17.471 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.471 "is_configured": true, 00:10:17.471 "data_offset": 2048, 00:10:17.471 "data_size": 63488 00:10:17.471 }, 00:10:17.471 { 00:10:17.471 "name": "pt3", 00:10:17.471 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.471 "is_configured": true, 00:10:17.471 "data_offset": 2048, 00:10:17.471 "data_size": 63488 00:10:17.471 }, 00:10:17.471 { 00:10:17.471 "name": null, 00:10:17.471 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:17.471 "is_configured": false, 00:10:17.471 "data_offset": 2048, 00:10:17.471 "data_size": 63488 00:10:17.471 } 00:10:17.471 ] 00:10:17.471 }' 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.471 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.746 [2024-11-27 
21:42:40.804353] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:17.746 [2024-11-27 21:42:40.804473] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.746 [2024-11-27 21:42:40.804512] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:10:17.746 [2024-11-27 21:42:40.804549] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.746 [2024-11-27 21:42:40.805028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.746 [2024-11-27 21:42:40.805093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:17.746 [2024-11-27 21:42:40.805208] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:17.746 [2024-11-27 21:42:40.805267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:17.746 [2024-11-27 21:42:40.805382] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:10:17.746 [2024-11-27 21:42:40.805394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.746 [2024-11-27 21:42:40.805647] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:10:17.746 [2024-11-27 21:42:40.805775] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:10:17.746 [2024-11-27 21:42:40.805785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:10:17.746 [2024-11-27 21:42:40.805932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.746 pt4 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.746 21:42:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.746 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.009 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.009 "name": "raid_bdev1", 00:10:18.009 "uuid": "a6d14ed0-e502-4f6d-a995-eaa53e0fe13b", 00:10:18.009 "strip_size_kb": 0, 00:10:18.009 "state": "online", 00:10:18.009 "raid_level": "raid1", 00:10:18.009 "superblock": true, 00:10:18.009 "num_base_bdevs": 4, 00:10:18.009 "num_base_bdevs_discovered": 3, 00:10:18.009 "num_base_bdevs_operational": 3, 00:10:18.009 "base_bdevs_list": [ 00:10:18.009 { 
00:10:18.009 "name": null, 00:10:18.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.009 "is_configured": false, 00:10:18.009 "data_offset": 2048, 00:10:18.009 "data_size": 63488 00:10:18.009 }, 00:10:18.009 { 00:10:18.009 "name": "pt2", 00:10:18.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.009 "is_configured": true, 00:10:18.009 "data_offset": 2048, 00:10:18.009 "data_size": 63488 00:10:18.009 }, 00:10:18.009 { 00:10:18.009 "name": "pt3", 00:10:18.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.009 "is_configured": true, 00:10:18.009 "data_offset": 2048, 00:10:18.009 "data_size": 63488 00:10:18.009 }, 00:10:18.009 { 00:10:18.009 "name": "pt4", 00:10:18.009 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:18.009 "is_configured": true, 00:10:18.009 "data_offset": 2048, 00:10:18.009 "data_size": 63488 00:10:18.009 } 00:10:18.009 ] 00:10:18.009 }' 00:10:18.009 21:42:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.009 21:42:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.269 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:18.269 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.269 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.269 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:18.269 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.529 
21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:18.529 [2024-11-27 21:42:41.399627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' a6d14ed0-e502-4f6d-a995-eaa53e0fe13b '!=' a6d14ed0-e502-4f6d-a995-eaa53e0fe13b ']' 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84990 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84990 ']' 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84990 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84990 00:10:18.529 killing process with pid 84990 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84990' 00:10:18.529 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 84990 00:10:18.529 [2024-11-27 21:42:41.484591] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.529 [2024-11-27 21:42:41.484688] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.529 21:42:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 84990 00:10:18.529 [2024-11-27 21:42:41.484768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.529 [2024-11-27 21:42:41.484778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:10:18.529 [2024-11-27 21:42:41.528570] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.788 21:42:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:18.788 ************************************ 00:10:18.788 END TEST raid_superblock_test 00:10:18.788 ************************************ 00:10:18.788 00:10:18.788 real 0m7.199s 00:10:18.788 user 0m12.188s 00:10:18.788 sys 0m1.449s 00:10:18.788 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.788 21:42:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.788 21:42:41 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:10:18.788 21:42:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.788 21:42:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.788 21:42:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.788 ************************************ 00:10:18.788 START TEST raid_read_error_test 00:10:18.788 ************************************ 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:18.788 
21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.788 21:42:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LylWkteTXN 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85463 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85463 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 85463 ']' 00:10:18.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.788 21:42:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.047 [2024-11-27 21:42:41.918767] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:10:19.047 [2024-11-27 21:42:41.918914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85463 ] 00:10:19.047 [2024-11-27 21:42:42.072623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.047 [2024-11-27 21:42:42.097459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.047 [2024-11-27 21:42:42.140035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.047 [2024-11-27 21:42:42.140142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 BaseBdev1_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 true 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 [2024-11-27 21:42:42.775413] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.985 [2024-11-27 21:42:42.775471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.985 [2024-11-27 21:42:42.775509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:19.985 [2024-11-27 21:42:42.775518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.985 [2024-11-27 21:42:42.777675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.985 [2024-11-27 21:42:42.777711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.985 BaseBdev1 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 BaseBdev2_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 true 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 [2024-11-27 21:42:42.815903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.985 [2024-11-27 21:42:42.815947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.985 [2024-11-27 21:42:42.815980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:19.985 [2024-11-27 21:42:42.815996] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.985 [2024-11-27 21:42:42.818065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.985 [2024-11-27 21:42:42.818101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.985 BaseBdev2 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 BaseBdev3_malloc 00:10:19.985 21:42:42 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 true 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 [2024-11-27 21:42:42.856438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.985 [2024-11-27 21:42:42.856483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.985 [2024-11-27 21:42:42.856502] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:19.985 [2024-11-27 21:42:42.856510] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.985 [2024-11-27 21:42:42.858629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.985 [2024-11-27 21:42:42.858665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.985 BaseBdev3 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 BaseBdev4_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 true 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.985 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.985 [2024-11-27 21:42:42.904719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.985 [2024-11-27 21:42:42.904764] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.985 [2024-11-27 21:42:42.904784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:19.985 [2024-11-27 21:42:42.904793] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.986 [2024-11-27 21:42:42.906762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.986 [2024-11-27 21:42:42.906805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.986 BaseBdev4 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 [2024-11-27 21:42:42.916737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.986 [2024-11-27 21:42:42.918489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.986 [2024-11-27 21:42:42.918609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.986 [2024-11-27 21:42:42.918664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.986 [2024-11-27 21:42:42.918879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:19.986 [2024-11-27 21:42:42.918892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:19.986 [2024-11-27 21:42:42.919142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:19.986 [2024-11-27 21:42:42.919280] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:19.986 [2024-11-27 21:42:42.919292] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:19.986 [2024-11-27 21:42:42.919405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:19.986 21:42:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.986 "name": "raid_bdev1", 00:10:19.986 "uuid": "3efca3b9-4891-4d2c-be7a-1839686fcc70", 00:10:19.986 "strip_size_kb": 0, 00:10:19.986 "state": "online", 00:10:19.986 "raid_level": "raid1", 00:10:19.986 "superblock": true, 00:10:19.986 "num_base_bdevs": 4, 00:10:19.986 "num_base_bdevs_discovered": 4, 00:10:19.986 "num_base_bdevs_operational": 4, 00:10:19.986 "base_bdevs_list": [ 00:10:19.986 { 
00:10:19.986 "name": "BaseBdev1", 00:10:19.986 "uuid": "d4785de1-190a-520c-a3ed-3447fcbd2ddb", 00:10:19.986 "is_configured": true, 00:10:19.986 "data_offset": 2048, 00:10:19.986 "data_size": 63488 00:10:19.986 }, 00:10:19.986 { 00:10:19.986 "name": "BaseBdev2", 00:10:19.986 "uuid": "592bc8bd-3cef-5fd5-8d9b-ea1f7f2a7efe", 00:10:19.986 "is_configured": true, 00:10:19.986 "data_offset": 2048, 00:10:19.986 "data_size": 63488 00:10:19.986 }, 00:10:19.986 { 00:10:19.986 "name": "BaseBdev3", 00:10:19.986 "uuid": "cdc5611d-def1-5d8d-a856-db8bdbfd446b", 00:10:19.986 "is_configured": true, 00:10:19.986 "data_offset": 2048, 00:10:19.986 "data_size": 63488 00:10:19.986 }, 00:10:19.986 { 00:10:19.986 "name": "BaseBdev4", 00:10:19.986 "uuid": "d0ea0db0-874a-500b-aac1-bc709321afda", 00:10:19.986 "is_configured": true, 00:10:19.986 "data_offset": 2048, 00:10:19.986 "data_size": 63488 00:10:19.986 } 00:10:19.986 ] 00:10:19.986 }' 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.986 21:42:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.245 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.245 21:42:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.505 [2024-11-27 21:42:43.432278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.445 21:42:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.445 21:42:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.445 "name": "raid_bdev1", 00:10:21.445 "uuid": "3efca3b9-4891-4d2c-be7a-1839686fcc70", 00:10:21.445 "strip_size_kb": 0, 00:10:21.445 "state": "online", 00:10:21.445 "raid_level": "raid1", 00:10:21.445 "superblock": true, 00:10:21.445 "num_base_bdevs": 4, 00:10:21.445 "num_base_bdevs_discovered": 4, 00:10:21.445 "num_base_bdevs_operational": 4, 00:10:21.445 "base_bdevs_list": [ 00:10:21.445 { 00:10:21.445 "name": "BaseBdev1", 00:10:21.445 "uuid": "d4785de1-190a-520c-a3ed-3447fcbd2ddb", 00:10:21.445 "is_configured": true, 00:10:21.445 "data_offset": 2048, 00:10:21.445 "data_size": 63488 00:10:21.445 }, 00:10:21.445 { 00:10:21.445 "name": "BaseBdev2", 00:10:21.445 "uuid": "592bc8bd-3cef-5fd5-8d9b-ea1f7f2a7efe", 00:10:21.445 "is_configured": true, 00:10:21.445 "data_offset": 2048, 00:10:21.445 "data_size": 63488 00:10:21.445 }, 00:10:21.445 { 00:10:21.445 "name": "BaseBdev3", 00:10:21.445 "uuid": "cdc5611d-def1-5d8d-a856-db8bdbfd446b", 00:10:21.445 "is_configured": true, 00:10:21.445 "data_offset": 2048, 00:10:21.445 "data_size": 63488 00:10:21.445 }, 00:10:21.445 { 00:10:21.445 "name": "BaseBdev4", 00:10:21.445 "uuid": "d0ea0db0-874a-500b-aac1-bc709321afda", 00:10:21.445 "is_configured": true, 00:10:21.445 "data_offset": 2048, 00:10:21.445 "data_size": 63488 00:10:21.445 } 00:10:21.445 ] 00:10:21.445 }' 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.445 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.705 [2024-11-27 21:42:44.788036] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.705 [2024-11-27 21:42:44.788139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.705 [2024-11-27 21:42:44.790911] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.705 [2024-11-27 21:42:44.791012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.705 [2024-11-27 21:42:44.791200] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.705 [2024-11-27 21:42:44.791251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:21.705 { 00:10:21.705 "results": [ 00:10:21.705 { 00:10:21.705 "job": "raid_bdev1", 00:10:21.705 "core_mask": "0x1", 00:10:21.705 "workload": "randrw", 00:10:21.705 "percentage": 50, 00:10:21.705 "status": "finished", 00:10:21.705 "queue_depth": 1, 00:10:21.705 "io_size": 131072, 00:10:21.705 "runtime": 1.356855, 00:10:21.705 "iops": 11064.55737717, 00:10:21.705 "mibps": 1383.06967214625, 00:10:21.705 "io_failed": 0, 00:10:21.705 "io_timeout": 0, 00:10:21.705 "avg_latency_us": 87.61785573318262, 00:10:21.705 "min_latency_us": 24.370305676855896, 00:10:21.705 "max_latency_us": 1559.6995633187773 00:10:21.705 } 00:10:21.705 ], 00:10:21.705 "core_count": 1 00:10:21.705 } 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85463 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 85463 ']' 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 85463 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.705 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85463 00:10:21.965 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.965 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.965 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85463' 00:10:21.965 killing process with pid 85463 00:10:21.965 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 85463 00:10:21.965 [2024-11-27 21:42:44.835354] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.965 21:42:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 85463 00:10:21.965 [2024-11-27 21:42:44.871438] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:21.965 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:21.965 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LylWkteTXN 00:10:21.965 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:21.966 00:10:21.966 real 0m3.270s 00:10:21.966 user 0m4.109s 00:10:21.966 sys 0m0.530s 
00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.966 21:42:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.966 ************************************ 00:10:21.966 END TEST raid_read_error_test 00:10:21.966 ************************************ 00:10:22.225 21:42:45 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:10:22.225 21:42:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:22.225 21:42:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:22.225 21:42:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:22.225 ************************************ 00:10:22.225 START TEST raid_write_error_test 00:10:22.225 ************************************ 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:22.225 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wLHhkc7rkR 00:10:22.226 21:42:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=85598 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 85598 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 85598 ']' 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:22.226 21:42:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.226 [2024-11-27 21:42:45.235695] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:10:22.226 [2024-11-27 21:42:45.235940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85598 ] 00:10:22.485 [2024-11-27 21:42:45.389849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.485 [2024-11-27 21:42:45.416066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.485 [2024-11-27 21:42:45.460194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:22.485 [2024-11-27 21:42:45.460225] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 BaseBdev1_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 true 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 [2024-11-27 21:42:46.096005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:23.054 [2024-11-27 21:42:46.096103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.054 [2024-11-27 21:42:46.096165] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006980 00:10:23.054 [2024-11-27 21:42:46.096192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.054 [2024-11-27 21:42:46.098479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.054 [2024-11-27 21:42:46.098516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:23.054 BaseBdev1 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 BaseBdev2_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:23.054 21:42:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 true 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.054 [2024-11-27 21:42:46.137035] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:23.054 [2024-11-27 21:42:46.137120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.054 [2024-11-27 21:42:46.137159] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:10:23.054 [2024-11-27 21:42:46.137179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.054 [2024-11-27 21:42:46.139314] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.054 [2024-11-27 21:42:46.139353] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:23.054 BaseBdev2 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:23.054 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:23.055 BaseBdev3_malloc 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.055 true 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.055 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 [2024-11-27 21:42:46.177640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:23.315 [2024-11-27 21:42:46.177687] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.315 [2024-11-27 21:42:46.177723] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:10:23.315 [2024-11-27 21:42:46.177732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.315 [2024-11-27 21:42:46.179939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.315 [2024-11-27 21:42:46.180010] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:23.315 BaseBdev3 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 BaseBdev4_malloc 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 true 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 [2024-11-27 21:42:46.229582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:23.315 [2024-11-27 21:42:46.229628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:23.315 [2024-11-27 21:42:46.229650] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:23.315 [2024-11-27 21:42:46.229658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:23.315 [2024-11-27 21:42:46.231717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:23.315 [2024-11-27 21:42:46.231752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:23.315 BaseBdev4 
00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.315 [2024-11-27 21:42:46.241594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.315 [2024-11-27 21:42:46.243415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:23.315 [2024-11-27 21:42:46.243581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.315 [2024-11-27 21:42:46.243640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:23.315 [2024-11-27 21:42:46.243856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002000 00:10:23.315 [2024-11-27 21:42:46.243870] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:23.315 [2024-11-27 21:42:46.244124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002ef0 00:10:23.315 [2024-11-27 21:42:46.244270] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002000 00:10:23.315 [2024-11-27 21:42:46.244283] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002000 00:10:23.315 [2024-11-27 21:42:46.244394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.315 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.316 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.316 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.316 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.316 "name": "raid_bdev1", 00:10:23.316 "uuid": "29660b25-0747-4838-8d58-4d6d8edec9f0", 00:10:23.316 "strip_size_kb": 0, 00:10:23.316 "state": "online", 00:10:23.316 "raid_level": "raid1", 00:10:23.316 "superblock": true, 00:10:23.316 "num_base_bdevs": 4, 00:10:23.316 "num_base_bdevs_discovered": 4, 00:10:23.316 
"num_base_bdevs_operational": 4, 00:10:23.316 "base_bdevs_list": [ 00:10:23.316 { 00:10:23.316 "name": "BaseBdev1", 00:10:23.316 "uuid": "6a462479-1b44-57cd-b5ee-45fa1f4ef65e", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 2048, 00:10:23.316 "data_size": 63488 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev2", 00:10:23.316 "uuid": "815d7e66-92fd-5da2-9894-60a4afbba978", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 2048, 00:10:23.316 "data_size": 63488 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev3", 00:10:23.316 "uuid": "dee72127-7813-513a-bba2-4e1ad73a8e18", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 2048, 00:10:23.316 "data_size": 63488 00:10:23.316 }, 00:10:23.316 { 00:10:23.316 "name": "BaseBdev4", 00:10:23.316 "uuid": "451b8143-8d37-5d6a-9d45-4957013a4315", 00:10:23.316 "is_configured": true, 00:10:23.316 "data_offset": 2048, 00:10:23.316 "data_size": 63488 00:10:23.316 } 00:10:23.316 ] 00:10:23.316 }' 00:10:23.316 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.316 21:42:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.575 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:23.575 21:42:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:23.835 [2024-11-27 21:42:46.789075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000003090 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.775 [2024-11-27 21:42:47.704517] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:24.775 [2024-11-27 21:42:47.704652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:24.775 [2024-11-27 21:42:47.704943] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000003090 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.775 "name": "raid_bdev1", 00:10:24.775 "uuid": "29660b25-0747-4838-8d58-4d6d8edec9f0", 00:10:24.775 "strip_size_kb": 0, 00:10:24.775 "state": "online", 00:10:24.775 "raid_level": "raid1", 00:10:24.775 "superblock": true, 00:10:24.775 "num_base_bdevs": 4, 00:10:24.775 "num_base_bdevs_discovered": 3, 00:10:24.775 "num_base_bdevs_operational": 3, 00:10:24.775 "base_bdevs_list": [ 00:10:24.775 { 00:10:24.775 "name": null, 00:10:24.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.775 "is_configured": false, 00:10:24.775 "data_offset": 0, 00:10:24.775 "data_size": 63488 00:10:24.775 }, 00:10:24.775 { 00:10:24.775 "name": "BaseBdev2", 00:10:24.775 "uuid": "815d7e66-92fd-5da2-9894-60a4afbba978", 00:10:24.775 "is_configured": true, 00:10:24.775 "data_offset": 2048, 00:10:24.775 "data_size": 63488 00:10:24.775 }, 00:10:24.775 { 00:10:24.775 "name": "BaseBdev3", 00:10:24.775 "uuid": "dee72127-7813-513a-bba2-4e1ad73a8e18", 00:10:24.775 "is_configured": true, 00:10:24.775 "data_offset": 2048, 00:10:24.775 "data_size": 63488 00:10:24.775 }, 00:10:24.775 { 00:10:24.775 "name": "BaseBdev4", 00:10:24.775 "uuid": "451b8143-8d37-5d6a-9d45-4957013a4315", 00:10:24.775 "is_configured": true, 00:10:24.775 "data_offset": 2048, 00:10:24.775 "data_size": 63488 00:10:24.775 } 00:10:24.775 ] 
00:10:24.775 }' 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.775 21:42:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.034 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:25.034 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.034 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.034 [2024-11-27 21:42:48.099852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:25.034 [2024-11-27 21:42:48.099946] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:25.034 [2024-11-27 21:42:48.102618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:25.034 [2024-11-27 21:42:48.102734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.035 [2024-11-27 21:42:48.102892] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:25.035 [2024-11-27 21:42:48.102960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state offline 00:10:25.035 { 00:10:25.035 "results": [ 00:10:25.035 { 00:10:25.035 "job": "raid_bdev1", 00:10:25.035 "core_mask": "0x1", 00:10:25.035 "workload": "randrw", 00:10:25.035 "percentage": 50, 00:10:25.035 "status": "finished", 00:10:25.035 "queue_depth": 1, 00:10:25.035 "io_size": 131072, 00:10:25.035 "runtime": 1.311594, 00:10:25.035 "iops": 11998.377546710339, 00:10:25.035 "mibps": 1499.7971933387923, 00:10:25.035 "io_failed": 0, 00:10:25.035 "io_timeout": 0, 00:10:25.035 "avg_latency_us": 80.65395905901954, 00:10:25.035 "min_latency_us": 24.482096069868994, 00:10:25.035 "max_latency_us": 1423.7624454148472 00:10:25.035 } 00:10:25.035 ], 00:10:25.035 "core_count": 1 
00:10:25.035 } 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 85598 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 85598 ']' 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 85598 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85598 00:10:25.035 killing process with pid 85598 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85598' 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 85598 00:10:25.035 [2024-11-27 21:42:48.145640] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:25.035 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 85598 00:10:25.293 [2024-11-27 21:42:48.181967] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wLHhkc7rkR 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.293 ************************************ 00:10:25.293 END TEST raid_write_error_test 00:10:25.293 ************************************ 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.293 00:10:25.293 real 0m3.257s 00:10:25.293 user 0m4.096s 00:10:25.293 sys 0m0.534s 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.293 21:42:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.551 21:42:48 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:10:25.551 21:42:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:10:25.551 21:42:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:10:25.551 21:42:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:25.551 21:42:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.551 21:42:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.551 ************************************ 00:10:25.551 START TEST raid_rebuild_test 00:10:25.551 ************************************ 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:10:25.551 
21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=85725 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 85725 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 85725 ']' 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.551 21:42:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.551 [2024-11-27 21:42:48.564039] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:25.551 [2024-11-27 21:42:48.564244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85725 ] 00:10:25.551 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:25.551 Zero copy mechanism will not be used. 
00:10:25.808 [2024-11-27 21:42:48.716572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.808 [2024-11-27 21:42:48.742927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.808 [2024-11-27 21:42:48.785242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.808 [2024-11-27 21:42:48.785344] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.375 BaseBdev1_malloc 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.375 [2024-11-27 21:42:49.416586] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:26.375 [2024-11-27 21:42:49.416645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.375 [2024-11-27 21:42:49.416670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:26.375 [2024-11-27 21:42:49.416683] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.375 [2024-11-27 21:42:49.418785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.375 [2024-11-27 21:42:49.418839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:26.375 BaseBdev1 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.375 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.375 BaseBdev2_malloc 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.376 [2024-11-27 21:42:49.445057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:26.376 [2024-11-27 21:42:49.445112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.376 [2024-11-27 21:42:49.445135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:26.376 [2024-11-27 21:42:49.445143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.376 [2024-11-27 21:42:49.447195] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.376 [2024-11-27 21:42:49.447236] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:26.376 BaseBdev2 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.376 spare_malloc 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.376 spare_delay 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.376 [2024-11-27 21:42:49.485512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:26.376 [2024-11-27 21:42:49.485561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:26.376 [2024-11-27 21:42:49.485580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:26.376 [2024-11-27 21:42:49.485588] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:26.376 [2024-11-27 
21:42:49.487789] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:26.376 [2024-11-27 21:42:49.487830] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:26.376 spare 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.376 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.635 [2024-11-27 21:42:49.497525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.635 [2024-11-27 21:42:49.499480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:26.635 [2024-11-27 21:42:49.499572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:26.635 [2024-11-27 21:42:49.499583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:26.635 [2024-11-27 21:42:49.499914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:26.635 [2024-11-27 21:42:49.500055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:26.635 [2024-11-27 21:42:49.500071] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:26.636 [2024-11-27 21:42:49.500212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:26.636 21:42:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.636 "name": "raid_bdev1", 00:10:26.636 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:26.636 "strip_size_kb": 0, 00:10:26.636 "state": "online", 00:10:26.636 "raid_level": "raid1", 00:10:26.636 "superblock": false, 00:10:26.636 "num_base_bdevs": 2, 00:10:26.636 "num_base_bdevs_discovered": 2, 00:10:26.636 "num_base_bdevs_operational": 2, 00:10:26.636 "base_bdevs_list": [ 00:10:26.636 { 00:10:26.636 "name": "BaseBdev1", 
00:10:26.636 "uuid": "52ce58e0-cca4-59be-b762-789d599439a8", 00:10:26.636 "is_configured": true, 00:10:26.636 "data_offset": 0, 00:10:26.636 "data_size": 65536 00:10:26.636 }, 00:10:26.636 { 00:10:26.636 "name": "BaseBdev2", 00:10:26.636 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:26.636 "is_configured": true, 00:10:26.636 "data_offset": 0, 00:10:26.636 "data_size": 65536 00:10:26.636 } 00:10:26.636 ] 00:10:26.636 }' 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.636 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.894 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:26.894 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.894 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:26.895 [2024-11-27 21:42:49.945073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.895 21:42:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.895 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:10:27.153 
21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:27.153 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:27.153 [2024-11-27 21:42:50.228383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:27.153 /dev/nbd0 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:27.412 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:27.413 1+0 records in 00:10:27.413 1+0 records out 00:10:27.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359432 s, 11.4 MB/s 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:27.413 21:42:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:10:31.608 65536+0 records in 00:10:31.608 65536+0 records out 00:10:31.608 33554432 bytes (34 MB, 32 MiB) copied, 3.75087 s, 8.9 MB/s 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:31.608 [2024-11-27 21:42:54.273226] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.608 [2024-11-27 21:42:54.309275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.608 21:42:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.608 "name": "raid_bdev1", 00:10:31.608 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:31.608 "strip_size_kb": 0, 00:10:31.608 "state": "online", 00:10:31.608 "raid_level": "raid1", 00:10:31.608 "superblock": false, 00:10:31.608 "num_base_bdevs": 2, 00:10:31.608 "num_base_bdevs_discovered": 1, 00:10:31.608 "num_base_bdevs_operational": 1, 00:10:31.608 "base_bdevs_list": [ 00:10:31.608 { 00:10:31.608 "name": null, 00:10:31.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.608 "is_configured": false, 00:10:31.608 "data_offset": 0, 00:10:31.608 "data_size": 65536 00:10:31.608 }, 00:10:31.608 { 00:10:31.608 "name": "BaseBdev2", 00:10:31.608 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:31.608 "is_configured": true, 00:10:31.608 "data_offset": 0, 00:10:31.608 "data_size": 65536 00:10:31.608 } 00:10:31.608 ] 00:10:31.608 }' 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.608 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.868 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:31.868 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.868 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.868 [2024-11-27 21:42:54.784475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:31.868 [2024-11-27 21:42:54.789473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06220 00:10:31.868 21:42:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.868 21:42:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:31.868 [2024-11-27 21:42:54.791453] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:32.806 "name": "raid_bdev1", 00:10:32.806 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:32.806 "strip_size_kb": 0, 00:10:32.806 "state": "online", 00:10:32.806 "raid_level": "raid1", 00:10:32.806 "superblock": false, 00:10:32.806 "num_base_bdevs": 2, 00:10:32.806 "num_base_bdevs_discovered": 2, 00:10:32.806 "num_base_bdevs_operational": 2, 00:10:32.806 "process": { 00:10:32.806 "type": "rebuild", 00:10:32.806 "target": "spare", 00:10:32.806 "progress": { 00:10:32.806 "blocks": 20480, 00:10:32.806 "percent": 31 00:10:32.806 } 00:10:32.806 }, 00:10:32.806 "base_bdevs_list": [ 00:10:32.806 { 00:10:32.806 "name": "spare", 00:10:32.806 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:32.806 "is_configured": true, 00:10:32.806 "data_offset": 0, 00:10:32.806 
"data_size": 65536 00:10:32.806 }, 00:10:32.806 { 00:10:32.806 "name": "BaseBdev2", 00:10:32.806 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:32.806 "is_configured": true, 00:10:32.806 "data_offset": 0, 00:10:32.806 "data_size": 65536 00:10:32.806 } 00:10:32.806 ] 00:10:32.806 }' 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:32.806 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:33.066 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:33.066 21:42:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:33.066 21:42:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.066 21:42:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.066 [2024-11-27 21:42:55.935916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:33.066 [2024-11-27 21:42:55.996335] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:33.066 [2024-11-27 21:42:55.996397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:33.066 [2024-11-27 21:42:55.996417] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:33.066 [2024-11-27 21:42:55.996425] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.066 "name": "raid_bdev1", 00:10:33.066 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:33.066 "strip_size_kb": 0, 00:10:33.066 "state": "online", 00:10:33.066 "raid_level": "raid1", 00:10:33.066 "superblock": false, 00:10:33.066 "num_base_bdevs": 2, 00:10:33.066 "num_base_bdevs_discovered": 1, 00:10:33.066 "num_base_bdevs_operational": 1, 00:10:33.066 "base_bdevs_list": [ 00:10:33.066 { 00:10:33.066 "name": null, 00:10:33.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.066 
"is_configured": false, 00:10:33.066 "data_offset": 0, 00:10:33.066 "data_size": 65536 00:10:33.066 }, 00:10:33.066 { 00:10:33.066 "name": "BaseBdev2", 00:10:33.066 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:33.066 "is_configured": true, 00:10:33.066 "data_offset": 0, 00:10:33.066 "data_size": 65536 00:10:33.066 } 00:10:33.066 ] 00:10:33.066 }' 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.066 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:33.337 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:33.612 "name": "raid_bdev1", 00:10:33.612 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:33.612 "strip_size_kb": 0, 00:10:33.612 "state": "online", 00:10:33.612 "raid_level": "raid1", 00:10:33.612 "superblock": false, 00:10:33.612 "num_base_bdevs": 2, 00:10:33.612 
"num_base_bdevs_discovered": 1, 00:10:33.612 "num_base_bdevs_operational": 1, 00:10:33.612 "base_bdevs_list": [ 00:10:33.612 { 00:10:33.612 "name": null, 00:10:33.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.612 "is_configured": false, 00:10:33.612 "data_offset": 0, 00:10:33.612 "data_size": 65536 00:10:33.612 }, 00:10:33.612 { 00:10:33.612 "name": "BaseBdev2", 00:10:33.612 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:33.612 "is_configured": true, 00:10:33.612 "data_offset": 0, 00:10:33.612 "data_size": 65536 00:10:33.612 } 00:10:33.612 ] 00:10:33.612 }' 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.612 [2024-11-27 21:42:56.560473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:33.612 [2024-11-27 21:42:56.565361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d062f0 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.612 21:42:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:33.612 [2024-11-27 21:42:56.567287] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:34.551 "name": "raid_bdev1", 00:10:34.551 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:34.551 "strip_size_kb": 0, 00:10:34.551 "state": "online", 00:10:34.551 "raid_level": "raid1", 00:10:34.551 "superblock": false, 00:10:34.551 "num_base_bdevs": 2, 00:10:34.551 "num_base_bdevs_discovered": 2, 00:10:34.551 "num_base_bdevs_operational": 2, 00:10:34.551 "process": { 00:10:34.551 "type": "rebuild", 00:10:34.551 "target": "spare", 00:10:34.551 "progress": { 00:10:34.551 "blocks": 20480, 00:10:34.551 "percent": 31 00:10:34.551 } 00:10:34.551 }, 00:10:34.551 "base_bdevs_list": [ 00:10:34.551 { 00:10:34.551 "name": "spare", 00:10:34.551 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:34.551 "is_configured": true, 00:10:34.551 "data_offset": 0, 00:10:34.551 "data_size": 65536 00:10:34.551 }, 00:10:34.551 { 00:10:34.551 "name": "BaseBdev2", 00:10:34.551 "uuid": 
"84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:34.551 "is_configured": true, 00:10:34.551 "data_offset": 0, 00:10:34.551 "data_size": 65536 00:10:34.551 } 00:10:34.551 ] 00:10:34.551 }' 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:34.551 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=286 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:34.811 "name": "raid_bdev1", 00:10:34.811 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:34.811 "strip_size_kb": 0, 00:10:34.811 "state": "online", 00:10:34.811 "raid_level": "raid1", 00:10:34.811 "superblock": false, 00:10:34.811 "num_base_bdevs": 2, 00:10:34.811 "num_base_bdevs_discovered": 2, 00:10:34.811 "num_base_bdevs_operational": 2, 00:10:34.811 "process": { 00:10:34.811 "type": "rebuild", 00:10:34.811 "target": "spare", 00:10:34.811 "progress": { 00:10:34.811 "blocks": 22528, 00:10:34.811 "percent": 34 00:10:34.811 } 00:10:34.811 }, 00:10:34.811 "base_bdevs_list": [ 00:10:34.811 { 00:10:34.811 "name": "spare", 00:10:34.811 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:34.811 "is_configured": true, 00:10:34.811 "data_offset": 0, 00:10:34.811 "data_size": 65536 00:10:34.811 }, 00:10:34.811 { 00:10:34.811 "name": "BaseBdev2", 00:10:34.811 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:34.811 "is_configured": true, 00:10:34.811 "data_offset": 0, 00:10:34.811 "data_size": 65536 00:10:34.811 } 00:10:34.811 ] 00:10:34.811 }' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:34.811 21:42:57 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:35.750 21:42:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:36.010 "name": "raid_bdev1", 00:10:36.010 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:36.010 "strip_size_kb": 0, 00:10:36.010 "state": "online", 00:10:36.010 "raid_level": "raid1", 00:10:36.010 "superblock": false, 00:10:36.010 "num_base_bdevs": 2, 00:10:36.010 "num_base_bdevs_discovered": 2, 00:10:36.010 "num_base_bdevs_operational": 2, 00:10:36.010 "process": { 00:10:36.010 "type": "rebuild", 00:10:36.010 "target": "spare", 00:10:36.010 "progress": { 00:10:36.010 "blocks": 45056, 00:10:36.010 "percent": 68 00:10:36.010 } 00:10:36.010 }, 00:10:36.010 "base_bdevs_list": [ 00:10:36.010 { 00:10:36.010 "name": "spare", 00:10:36.010 "uuid": 
"16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:36.010 "is_configured": true, 00:10:36.010 "data_offset": 0, 00:10:36.010 "data_size": 65536 00:10:36.010 }, 00:10:36.010 { 00:10:36.010 "name": "BaseBdev2", 00:10:36.010 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:36.010 "is_configured": true, 00:10:36.010 "data_offset": 0, 00:10:36.010 "data_size": 65536 00:10:36.010 } 00:10:36.010 ] 00:10:36.010 }' 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:36.010 21:42:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:36.951 [2024-11-27 21:42:59.779142] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:36.951 [2024-11-27 21:42:59.779277] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:36.951 [2024-11-27 21:42:59.779320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.951 21:42:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:36.951 21:42:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:36.951 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:36.951 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:36.951 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:36.951 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:36.951 21:43:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.951 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.952 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:36.952 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.952 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.952 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:36.952 "name": "raid_bdev1", 00:10:36.952 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:36.952 "strip_size_kb": 0, 00:10:36.952 "state": "online", 00:10:36.952 "raid_level": "raid1", 00:10:36.952 "superblock": false, 00:10:36.952 "num_base_bdevs": 2, 00:10:36.952 "num_base_bdevs_discovered": 2, 00:10:36.952 "num_base_bdevs_operational": 2, 00:10:36.952 "base_bdevs_list": [ 00:10:36.952 { 00:10:36.952 "name": "spare", 00:10:36.952 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:36.952 "is_configured": true, 00:10:36.952 "data_offset": 0, 00:10:36.952 "data_size": 65536 00:10:36.952 }, 00:10:36.952 { 00:10:36.952 "name": "BaseBdev2", 00:10:36.952 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:36.952 "is_configured": true, 00:10:36.952 "data_offset": 0, 00:10:36.952 "data_size": 65536 00:10:36.952 } 00:10:36.952 ] 00:10:36.952 }' 00:10:36.952 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:37.211 "name": "raid_bdev1", 00:10:37.211 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:37.211 "strip_size_kb": 0, 00:10:37.211 "state": "online", 00:10:37.211 "raid_level": "raid1", 00:10:37.211 "superblock": false, 00:10:37.211 "num_base_bdevs": 2, 00:10:37.211 "num_base_bdevs_discovered": 2, 00:10:37.211 "num_base_bdevs_operational": 2, 00:10:37.211 "base_bdevs_list": [ 00:10:37.211 { 00:10:37.211 "name": "spare", 00:10:37.211 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:37.211 "is_configured": true, 00:10:37.211 "data_offset": 0, 00:10:37.211 "data_size": 65536 00:10:37.211 }, 00:10:37.211 { 00:10:37.211 "name": "BaseBdev2", 00:10:37.211 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:37.211 "is_configured": true, 00:10:37.211 "data_offset": 0, 00:10:37.211 "data_size": 65536 
00:10:37.211 } 00:10:37.211 ] 00:10:37.211 }' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.211 
21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.211 "name": "raid_bdev1", 00:10:37.211 "uuid": "6060773a-f5ac-4e92-b32a-c23e46bbdcf2", 00:10:37.211 "strip_size_kb": 0, 00:10:37.211 "state": "online", 00:10:37.211 "raid_level": "raid1", 00:10:37.211 "superblock": false, 00:10:37.211 "num_base_bdevs": 2, 00:10:37.211 "num_base_bdevs_discovered": 2, 00:10:37.211 "num_base_bdevs_operational": 2, 00:10:37.211 "base_bdevs_list": [ 00:10:37.211 { 00:10:37.211 "name": "spare", 00:10:37.211 "uuid": "16f1abd8-7d19-5bee-94f1-06692c8ec7c9", 00:10:37.211 "is_configured": true, 00:10:37.211 "data_offset": 0, 00:10:37.211 "data_size": 65536 00:10:37.211 }, 00:10:37.211 { 00:10:37.211 "name": "BaseBdev2", 00:10:37.211 "uuid": "84257b9a-96f1-5916-9217-d94b60ab31bc", 00:10:37.211 "is_configured": true, 00:10:37.211 "data_offset": 0, 00:10:37.211 "data_size": 65536 00:10:37.211 } 00:10:37.211 ] 00:10:37.211 }' 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.211 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.779 [2024-11-27 21:43:00.738307] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:37.779 [2024-11-27 21:43:00.738372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.779 [2024-11-27 21:43:00.738482] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.779 [2024-11-27 21:43:00.738599] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:37.779 [2024-11-27 21:43:00.738667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:37.779 21:43:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:38.039 /dev/nbd0 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.039 1+0 records in 00:10:38.039 1+0 records out 00:10:38.039 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540386 s, 7.6 MB/s 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.039 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:38.299 /dev/nbd1 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.299 1+0 records in 00:10:38.299 1+0 records out 00:10:38.299 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375961 s, 10.9 MB/s 00:10:38.299 21:43:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.299 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:38.559 
21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:38.559 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 85725 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 85725 ']' 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 85725 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85725 00:10:38.819 killing process with pid 85725 00:10:38.819 Received shutdown signal, test time was about 60.000000 seconds 00:10:38.819 00:10:38.819 Latency(us) 00:10:38.819 [2024-11-27T21:43:01.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:38.819 [2024-11-27T21:43:01.940Z] =================================================================================================================== 00:10:38.819 [2024-11-27T21:43:01.940Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85725' 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 85725 00:10:38.819 [2024-11-27 21:43:01.865482] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:38.819 21:43:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 85725 00:10:38.819 [2024-11-27 21:43:01.895282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:10:39.079 00:10:39.079 real 0m13.628s 00:10:39.079 user 0m15.906s 00:10:39.079 sys 0m2.733s 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.079 ************************************ 00:10:39.079 END TEST raid_rebuild_test 
00:10:39.079 ************************************ 00:10:39.079 21:43:02 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:10:39.079 21:43:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:39.079 21:43:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.079 21:43:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:39.079 ************************************ 00:10:39.079 START TEST raid_rebuild_test_sb 00:10:39.079 ************************************ 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:39.079 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86125 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86125 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86125 ']' 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.080 21:43:02 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.080 21:43:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.341 I/O size of 3145728 is greater than zero copy threshold (65536). 00:10:39.341 Zero copy mechanism will not be used. 00:10:39.341 [2024-11-27 21:43:02.263524] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:10:39.341 [2024-11-27 21:43:02.263659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86125 ] 00:10:39.341 [2024-11-27 21:43:02.417108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.341 [2024-11-27 21:43:02.442149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.601 [2024-11-27 21:43:02.484049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:39.601 [2024-11-27 21:43:02.484092] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:40.171 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.171 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:40.171 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:40.171 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:40.171 21:43:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 BaseBdev1_malloc 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 [2024-11-27 21:43:03.103164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:40.172 [2024-11-27 21:43:03.103266] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.172 [2024-11-27 21:43:03.103328] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:10:40.172 [2024-11-27 21:43:03.103359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.172 [2024-11-27 21:43:03.105553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.172 [2024-11-27 21:43:03.105624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:40.172 BaseBdev1 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 BaseBdev2_malloc 00:10:40.172 
21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 [2024-11-27 21:43:03.131588] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:10:40.172 [2024-11-27 21:43:03.131643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.172 [2024-11-27 21:43:03.131664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:40.172 [2024-11-27 21:43:03.131673] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.172 [2024-11-27 21:43:03.133801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.172 [2024-11-27 21:43:03.133847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:40.172 BaseBdev2 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 spare_malloc 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 spare_delay 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 [2024-11-27 21:43:03.171947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:40.172 [2024-11-27 21:43:03.171991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:40.172 [2024-11-27 21:43:03.172009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:40.172 [2024-11-27 21:43:03.172018] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:40.172 [2024-11-27 21:43:03.174128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:40.172 [2024-11-27 21:43:03.174160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:40.172 spare 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 [2024-11-27 21:43:03.183969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.172 [2024-11-27 
21:43:03.185836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.172 [2024-11-27 21:43:03.186034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:10:40.172 [2024-11-27 21:43:03.186069] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:40.172 [2024-11-27 21:43:03.186380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:10:40.172 [2024-11-27 21:43:03.186558] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:10:40.172 [2024-11-27 21:43:03.186604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:10:40.172 [2024-11-27 21:43:03.186788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.172 "name": "raid_bdev1", 00:10:40.172 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:40.172 "strip_size_kb": 0, 00:10:40.172 "state": "online", 00:10:40.172 "raid_level": "raid1", 00:10:40.172 "superblock": true, 00:10:40.172 "num_base_bdevs": 2, 00:10:40.172 "num_base_bdevs_discovered": 2, 00:10:40.172 "num_base_bdevs_operational": 2, 00:10:40.172 "base_bdevs_list": [ 00:10:40.172 { 00:10:40.172 "name": "BaseBdev1", 00:10:40.172 "uuid": "2621af38-5e8f-53e7-b5e5-7c7ad1925842", 00:10:40.172 "is_configured": true, 00:10:40.172 "data_offset": 2048, 00:10:40.172 "data_size": 63488 00:10:40.172 }, 00:10:40.172 { 00:10:40.172 "name": "BaseBdev2", 00:10:40.172 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:40.172 "is_configured": true, 00:10:40.172 "data_offset": 2048, 00:10:40.172 "data_size": 63488 00:10:40.172 } 00:10:40.172 ] 00:10:40.172 }' 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.172 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.742 [2024-11-27 21:43:03.627474] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:40.742 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:10:41.002 [2024-11-27 21:43:03.894835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:10:41.002 /dev/nbd0 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:41.002 1+0 records in 00:10:41.002 1+0 records out 00:10:41.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000564614 s, 7.3 MB/s 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:10:41.002 21:43:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:10:45.202 63488+0 records in 00:10:45.202 63488+0 records out 00:10:45.202 32505856 bytes (33 MB, 31 MiB) copied, 3.62559 s, 9.0 MB/s 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:45.202 [2024-11-27 21:43:07.827058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.202 [2024-11-27 21:43:07.847124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.202 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.202 "name": "raid_bdev1", 00:10:45.202 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:45.202 "strip_size_kb": 0, 00:10:45.202 "state": "online", 00:10:45.202 "raid_level": "raid1", 00:10:45.202 "superblock": true, 00:10:45.202 "num_base_bdevs": 2, 00:10:45.202 "num_base_bdevs_discovered": 1, 00:10:45.202 "num_base_bdevs_operational": 1, 00:10:45.202 "base_bdevs_list": [ 00:10:45.202 { 00:10:45.202 "name": null, 00:10:45.203 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:45.203 "is_configured": false, 00:10:45.203 "data_offset": 0, 00:10:45.203 "data_size": 63488 00:10:45.203 }, 00:10:45.203 { 00:10:45.203 "name": "BaseBdev2", 00:10:45.203 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:45.203 "is_configured": true, 00:10:45.203 "data_offset": 2048, 00:10:45.203 "data_size": 63488 00:10:45.203 } 00:10:45.203 ] 00:10:45.203 }' 00:10:45.203 21:43:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.203 21:43:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.203 21:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:45.203 21:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.203 21:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.203 [2024-11-27 21:43:08.282421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:45.203 [2024-11-27 21:43:08.287377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e280 00:10:45.203 21:43:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.203 21:43:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:10:45.203 [2024-11-27 21:43:08.289546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:46.583 
21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:46.583 "name": "raid_bdev1", 00:10:46.583 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:46.583 "strip_size_kb": 0, 00:10:46.583 "state": "online", 00:10:46.583 "raid_level": "raid1", 00:10:46.583 "superblock": true, 00:10:46.583 "num_base_bdevs": 2, 00:10:46.583 "num_base_bdevs_discovered": 2, 00:10:46.583 "num_base_bdevs_operational": 2, 00:10:46.583 "process": { 00:10:46.583 "type": "rebuild", 00:10:46.583 "target": "spare", 00:10:46.583 "progress": { 00:10:46.583 "blocks": 20480, 00:10:46.583 "percent": 32 00:10:46.583 } 00:10:46.583 }, 00:10:46.583 "base_bdevs_list": [ 00:10:46.583 { 00:10:46.583 "name": "spare", 00:10:46.583 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:46.583 "is_configured": true, 00:10:46.583 "data_offset": 2048, 00:10:46.583 "data_size": 63488 00:10:46.583 }, 00:10:46.583 { 00:10:46.583 "name": "BaseBdev2", 00:10:46.583 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:46.583 "is_configured": true, 00:10:46.583 "data_offset": 2048, 00:10:46.583 "data_size": 63488 00:10:46.583 } 00:10:46.583 ] 00:10:46.583 }' 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 [2024-11-27 21:43:09.449548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:46.583 [2024-11-27 21:43:09.494506] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:46.583 [2024-11-27 21:43:09.494558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.583 [2024-11-27 21:43:09.494592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:46.583 [2024-11-27 21:43:09.494599] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.583 "name": "raid_bdev1", 00:10:46.583 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:46.583 "strip_size_kb": 0, 00:10:46.583 "state": "online", 00:10:46.583 "raid_level": "raid1", 00:10:46.583 "superblock": true, 00:10:46.583 "num_base_bdevs": 2, 00:10:46.583 "num_base_bdevs_discovered": 1, 00:10:46.583 "num_base_bdevs_operational": 1, 00:10:46.583 "base_bdevs_list": [ 00:10:46.583 { 00:10:46.583 "name": null, 00:10:46.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.583 "is_configured": false, 00:10:46.583 "data_offset": 0, 00:10:46.583 "data_size": 63488 00:10:46.583 }, 00:10:46.583 { 00:10:46.583 "name": "BaseBdev2", 00:10:46.583 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:46.583 "is_configured": true, 00:10:46.583 "data_offset": 2048, 00:10:46.583 "data_size": 63488 00:10:46.583 } 00:10:46.583 ] 00:10:46.583 }' 00:10:46.583 21:43:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.583 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:47.153 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.154 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.154 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 21:43:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.154 21:43:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:47.154 "name": "raid_bdev1", 00:10:47.154 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:47.154 "strip_size_kb": 0, 00:10:47.154 "state": "online", 00:10:47.154 "raid_level": "raid1", 00:10:47.154 "superblock": true, 00:10:47.154 "num_base_bdevs": 2, 00:10:47.154 "num_base_bdevs_discovered": 1, 00:10:47.154 "num_base_bdevs_operational": 1, 00:10:47.154 "base_bdevs_list": [ 00:10:47.154 { 00:10:47.154 "name": null, 00:10:47.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.154 "is_configured": false, 00:10:47.154 "data_offset": 0, 00:10:47.154 "data_size": 63488 00:10:47.154 }, 00:10:47.154 
{ 00:10:47.154 "name": "BaseBdev2", 00:10:47.154 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:47.154 "is_configured": true, 00:10:47.154 "data_offset": 2048, 00:10:47.154 "data_size": 63488 00:10:47.154 } 00:10:47.154 ] 00:10:47.154 }' 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.154 [2024-11-27 21:43:10.122584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:47.154 [2024-11-27 21:43:10.127473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e350 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.154 21:43:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:10:47.154 [2024-11-27 21:43:10.129494] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.093 21:43:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.093 "name": "raid_bdev1", 00:10:48.093 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:48.093 "strip_size_kb": 0, 00:10:48.093 "state": "online", 00:10:48.093 "raid_level": "raid1", 00:10:48.093 "superblock": true, 00:10:48.093 "num_base_bdevs": 2, 00:10:48.093 "num_base_bdevs_discovered": 2, 00:10:48.093 "num_base_bdevs_operational": 2, 00:10:48.093 "process": { 00:10:48.093 "type": "rebuild", 00:10:48.093 "target": "spare", 00:10:48.093 "progress": { 00:10:48.093 "blocks": 20480, 00:10:48.093 "percent": 32 00:10:48.093 } 00:10:48.093 }, 00:10:48.093 "base_bdevs_list": [ 00:10:48.093 { 00:10:48.093 "name": "spare", 00:10:48.093 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:48.093 "is_configured": true, 00:10:48.093 "data_offset": 2048, 00:10:48.093 "data_size": 63488 00:10:48.093 }, 00:10:48.093 { 00:10:48.093 "name": "BaseBdev2", 00:10:48.093 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:48.093 "is_configured": true, 00:10:48.093 "data_offset": 2048, 00:10:48.093 "data_size": 63488 00:10:48.093 } 00:10:48.093 ] 00:10:48.093 }' 00:10:48.093 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:10:48.352 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=300 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:48.352 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:48.353 "name": "raid_bdev1", 00:10:48.353 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:48.353 "strip_size_kb": 0, 00:10:48.353 "state": "online", 00:10:48.353 "raid_level": "raid1", 00:10:48.353 "superblock": true, 00:10:48.353 "num_base_bdevs": 2, 00:10:48.353 "num_base_bdevs_discovered": 2, 00:10:48.353 "num_base_bdevs_operational": 2, 00:10:48.353 "process": { 00:10:48.353 "type": "rebuild", 00:10:48.353 "target": "spare", 00:10:48.353 "progress": { 00:10:48.353 "blocks": 22528, 00:10:48.353 "percent": 35 00:10:48.353 } 00:10:48.353 }, 00:10:48.353 "base_bdevs_list": [ 00:10:48.353 { 00:10:48.353 "name": "spare", 00:10:48.353 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:48.353 "is_configured": true, 00:10:48.353 "data_offset": 2048, 00:10:48.353 "data_size": 63488 00:10:48.353 }, 00:10:48.353 { 00:10:48.353 "name": "BaseBdev2", 00:10:48.353 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:48.353 "is_configured": true, 00:10:48.353 "data_offset": 2048, 00:10:48.353 "data_size": 63488 00:10:48.353 } 00:10:48.353 ] 00:10:48.353 }' 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:48.353 21:43:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:48.353 21:43:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:49.732 "name": "raid_bdev1", 00:10:49.732 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:49.732 "strip_size_kb": 0, 00:10:49.732 "state": "online", 00:10:49.732 "raid_level": "raid1", 00:10:49.732 "superblock": true, 00:10:49.732 "num_base_bdevs": 2, 00:10:49.732 "num_base_bdevs_discovered": 2, 00:10:49.732 "num_base_bdevs_operational": 2, 00:10:49.732 "process": { 00:10:49.732 "type": "rebuild", 00:10:49.732 "target": "spare", 00:10:49.732 "progress": { 00:10:49.732 "blocks": 45056, 00:10:49.732 "percent": 70 00:10:49.732 } 00:10:49.732 }, 00:10:49.732 "base_bdevs_list": [ 00:10:49.732 { 
00:10:49.732 "name": "spare", 00:10:49.732 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:49.732 "is_configured": true, 00:10:49.732 "data_offset": 2048, 00:10:49.732 "data_size": 63488 00:10:49.732 }, 00:10:49.732 { 00:10:49.732 "name": "BaseBdev2", 00:10:49.732 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:49.732 "is_configured": true, 00:10:49.732 "data_offset": 2048, 00:10:49.732 "data_size": 63488 00:10:49.732 } 00:10:49.732 ] 00:10:49.732 }' 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:49.732 21:43:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:10:50.301 [2024-11-27 21:43:13.240436] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:10:50.301 [2024-11-27 21:43:13.240634] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:10:50.301 [2024-11-27 21:43:13.240848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:50.561 21:43:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.561 "name": "raid_bdev1", 00:10:50.561 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:50.561 "strip_size_kb": 0, 00:10:50.561 "state": "online", 00:10:50.561 "raid_level": "raid1", 00:10:50.561 "superblock": true, 00:10:50.561 "num_base_bdevs": 2, 00:10:50.561 "num_base_bdevs_discovered": 2, 00:10:50.561 "num_base_bdevs_operational": 2, 00:10:50.561 "base_bdevs_list": [ 00:10:50.561 { 00:10:50.561 "name": "spare", 00:10:50.561 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:50.561 "is_configured": true, 00:10:50.561 "data_offset": 2048, 00:10:50.561 "data_size": 63488 00:10:50.561 }, 00:10:50.561 { 00:10:50.561 "name": "BaseBdev2", 00:10:50.561 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:50.561 "is_configured": true, 00:10:50.561 "data_offset": 2048, 00:10:50.561 "data_size": 63488 00:10:50.561 } 00:10:50.561 ] 00:10:50.561 }' 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:10:50.561 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.821 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:50.822 "name": "raid_bdev1", 00:10:50.822 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:50.822 "strip_size_kb": 0, 00:10:50.822 "state": "online", 00:10:50.822 "raid_level": "raid1", 00:10:50.822 "superblock": true, 00:10:50.822 "num_base_bdevs": 2, 00:10:50.822 "num_base_bdevs_discovered": 2, 00:10:50.822 "num_base_bdevs_operational": 2, 00:10:50.822 "base_bdevs_list": [ 00:10:50.822 { 00:10:50.822 "name": "spare", 00:10:50.822 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:50.822 "is_configured": true, 00:10:50.822 "data_offset": 2048, 00:10:50.822 "data_size": 63488 00:10:50.822 }, 00:10:50.822 { 00:10:50.822 "name": 
"BaseBdev2", 00:10:50.822 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:50.822 "is_configured": true, 00:10:50.822 "data_offset": 2048, 00:10:50.822 "data_size": 63488 00:10:50.822 } 00:10:50.822 ] 00:10:50.822 }' 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.822 "name": "raid_bdev1", 00:10:50.822 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:50.822 "strip_size_kb": 0, 00:10:50.822 "state": "online", 00:10:50.822 "raid_level": "raid1", 00:10:50.822 "superblock": true, 00:10:50.822 "num_base_bdevs": 2, 00:10:50.822 "num_base_bdevs_discovered": 2, 00:10:50.822 "num_base_bdevs_operational": 2, 00:10:50.822 "base_bdevs_list": [ 00:10:50.822 { 00:10:50.822 "name": "spare", 00:10:50.822 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:50.822 "is_configured": true, 00:10:50.822 "data_offset": 2048, 00:10:50.822 "data_size": 63488 00:10:50.822 }, 00:10:50.822 { 00:10:50.822 "name": "BaseBdev2", 00:10:50.822 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:50.822 "is_configured": true, 00:10:50.822 "data_offset": 2048, 00:10:50.822 "data_size": 63488 00:10:50.822 } 00:10:50.822 ] 00:10:50.822 }' 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.822 21:43:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 [2024-11-27 21:43:14.260198] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:51.392 [2024-11-27 21:43:14.260270] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.392 [2024-11-27 21:43:14.260384] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.392 [2024-11-27 21:43:14.260520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:51.392 [2024-11-27 21:43:14.260573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.392 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:10:51.392 /dev/nbd0 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:51.652 1+0 records in 00:10:51.652 1+0 records out 00:10:51.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000345701 s, 11.8 MB/s 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:10:51.652 /dev/nbd1 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:51.652 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:51.912 21:43:14 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:51.912 1+0 records in 00:10:51.912 1+0 records out 00:10:51.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384423 s, 10.7 MB/s 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:10:51.912 21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:51.912 
21:43:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.172 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.172 [2024-11-27 21:43:15.276779] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:52.172 [2024-11-27 21:43:15.276886] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.173 [2024-11-27 21:43:15.276922] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:52.173 [2024-11-27 21:43:15.276959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.173 [2024-11-27 21:43:15.279108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.173 [2024-11-27 21:43:15.279179] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:52.173 [2024-11-27 21:43:15.279296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:52.173 [2024-11-27 21:43:15.279376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:52.173 [2024-11-27 21:43:15.279560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:10:52.173 spare 00:10:52.173 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.173 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:10:52.173 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.173 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.432 [2024-11-27 21:43:15.379496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:10:52.432 [2024-11-27 21:43:15.379551] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:52.432 [2024-11-27 21:43:15.379904] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cae960 00:10:52.432 [2024-11-27 21:43:15.380106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:10:52.432 [2024-11-27 21:43:15.380156] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:10:52.432 [2024-11-27 21:43:15.380361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.432 "name": "raid_bdev1", 00:10:52.432 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:52.432 "strip_size_kb": 0, 00:10:52.432 "state": "online", 00:10:52.432 "raid_level": "raid1", 00:10:52.432 "superblock": true, 00:10:52.432 "num_base_bdevs": 2, 00:10:52.432 "num_base_bdevs_discovered": 2, 00:10:52.432 "num_base_bdevs_operational": 2, 00:10:52.432 "base_bdevs_list": [ 00:10:52.432 { 00:10:52.432 "name": "spare", 00:10:52.432 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:52.432 "is_configured": true, 00:10:52.432 "data_offset": 2048, 00:10:52.432 "data_size": 63488 00:10:52.432 }, 00:10:52.432 { 00:10:52.432 "name": "BaseBdev2", 00:10:52.432 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:52.432 "is_configured": true, 00:10:52.432 "data_offset": 2048, 00:10:52.432 "data_size": 63488 00:10:52.432 } 00:10:52.432 ] 00:10:52.432 }' 00:10:52.432 21:43:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.432 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.691 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.950 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.950 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:52.951 "name": "raid_bdev1", 00:10:52.951 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:52.951 "strip_size_kb": 0, 00:10:52.951 "state": "online", 00:10:52.951 "raid_level": "raid1", 00:10:52.951 "superblock": true, 00:10:52.951 "num_base_bdevs": 2, 00:10:52.951 "num_base_bdevs_discovered": 2, 00:10:52.951 "num_base_bdevs_operational": 2, 00:10:52.951 "base_bdevs_list": [ 00:10:52.951 { 00:10:52.951 "name": "spare", 00:10:52.951 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:52.951 "is_configured": true, 00:10:52.951 "data_offset": 2048, 00:10:52.951 "data_size": 63488 00:10:52.951 }, 
00:10:52.951 { 00:10:52.951 "name": "BaseBdev2", 00:10:52.951 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:52.951 "is_configured": true, 00:10:52.951 "data_offset": 2048, 00:10:52.951 "data_size": 63488 00:10:52.951 } 00:10:52.951 ] 00:10:52.951 }' 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.951 [2024-11-27 21:43:15.979725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.951 21:43:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.951 21:43:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.951 21:43:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.951 "name": "raid_bdev1", 00:10:52.951 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:52.951 "strip_size_kb": 0, 00:10:52.951 "state": "online", 00:10:52.951 "raid_level": "raid1", 00:10:52.951 "superblock": true, 00:10:52.951 "num_base_bdevs": 2, 00:10:52.951 "num_base_bdevs_discovered": 1, 00:10:52.951 "num_base_bdevs_operational": 
1, 00:10:52.951 "base_bdevs_list": [ 00:10:52.951 { 00:10:52.951 "name": null, 00:10:52.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:52.951 "is_configured": false, 00:10:52.951 "data_offset": 0, 00:10:52.951 "data_size": 63488 00:10:52.951 }, 00:10:52.951 { 00:10:52.951 "name": "BaseBdev2", 00:10:52.951 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:52.951 "is_configured": true, 00:10:52.951 "data_offset": 2048, 00:10:52.951 "data_size": 63488 00:10:52.951 } 00:10:52.951 ] 00:10:52.951 }' 00:10:52.951 21:43:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.951 21:43:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.558 21:43:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:10:53.558 21:43:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.558 21:43:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.558 [2024-11-27 21:43:16.454942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:53.558 [2024-11-27 21:43:16.455222] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:53.558 [2024-11-27 21:43:16.455284] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:10:53.558 [2024-11-27 21:43:16.455377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:53.558 [2024-11-27 21:43:16.460091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caea30 00:10:53.558 21:43:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.558 21:43:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:10:53.558 [2024-11-27 21:43:16.462031] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:54.496 "name": "raid_bdev1", 00:10:54.496 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:54.496 "strip_size_kb": 0, 00:10:54.496 "state": "online", 00:10:54.496 "raid_level": "raid1", 
00:10:54.496 "superblock": true, 00:10:54.496 "num_base_bdevs": 2, 00:10:54.496 "num_base_bdevs_discovered": 2, 00:10:54.496 "num_base_bdevs_operational": 2, 00:10:54.496 "process": { 00:10:54.496 "type": "rebuild", 00:10:54.496 "target": "spare", 00:10:54.496 "progress": { 00:10:54.496 "blocks": 20480, 00:10:54.496 "percent": 32 00:10:54.496 } 00:10:54.496 }, 00:10:54.496 "base_bdevs_list": [ 00:10:54.496 { 00:10:54.496 "name": "spare", 00:10:54.496 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:54.496 "is_configured": true, 00:10:54.496 "data_offset": 2048, 00:10:54.496 "data_size": 63488 00:10:54.496 }, 00:10:54.496 { 00:10:54.496 "name": "BaseBdev2", 00:10:54.496 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:54.496 "is_configured": true, 00:10:54.496 "data_offset": 2048, 00:10:54.496 "data_size": 63488 00:10:54.496 } 00:10:54.496 ] 00:10:54.496 }' 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.496 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.755 [2024-11-27 21:43:17.618238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:54.755 [2024-11-27 21:43:17.666060] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:54.755 [2024-11-27 21:43:17.666108] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:10:54.755 [2024-11-27 21:43:17.666124] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:54.755 [2024-11-27 21:43:17.666130] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.755 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.756 "name": "raid_bdev1", 00:10:54.756 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:54.756 "strip_size_kb": 0, 00:10:54.756 "state": "online", 00:10:54.756 "raid_level": "raid1", 00:10:54.756 "superblock": true, 00:10:54.756 "num_base_bdevs": 2, 00:10:54.756 "num_base_bdevs_discovered": 1, 00:10:54.756 "num_base_bdevs_operational": 1, 00:10:54.756 "base_bdevs_list": [ 00:10:54.756 { 00:10:54.756 "name": null, 00:10:54.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:54.756 "is_configured": false, 00:10:54.756 "data_offset": 0, 00:10:54.756 "data_size": 63488 00:10:54.756 }, 00:10:54.756 { 00:10:54.756 "name": "BaseBdev2", 00:10:54.756 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:54.756 "is_configured": true, 00:10:54.756 "data_offset": 2048, 00:10:54.756 "data_size": 63488 00:10:54.756 } 00:10:54.756 ] 00:10:54.756 }' 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.756 21:43:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.016 21:43:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:10:55.016 21:43:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.016 21:43:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.016 [2024-11-27 21:43:18.113929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:10:55.016 [2024-11-27 21:43:18.114041] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.016 [2024-11-27 21:43:18.114080] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:55.016 [2024-11-27 21:43:18.114107] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.016 [2024-11-27 21:43:18.114586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.016 [2024-11-27 21:43:18.114643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:10:55.016 [2024-11-27 21:43:18.114791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:10:55.016 [2024-11-27 21:43:18.114843] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:10:55.016 [2024-11-27 21:43:18.114912] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:10:55.016 [2024-11-27 21:43:18.114962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:10:55.016 [2024-11-27 21:43:18.119688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:10:55.016 spare 00:10:55.016 21:43:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.016 21:43:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:10:55.016 [2024-11-27 21:43:18.121673] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.397 "name": "raid_bdev1", 00:10:56.397 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:56.397 "strip_size_kb": 0, 00:10:56.397 "state": "online", 00:10:56.397 "raid_level": "raid1", 00:10:56.397 "superblock": true, 00:10:56.397 "num_base_bdevs": 2, 00:10:56.397 "num_base_bdevs_discovered": 2, 00:10:56.397 "num_base_bdevs_operational": 2, 00:10:56.397 "process": { 00:10:56.397 "type": "rebuild", 00:10:56.397 "target": "spare", 00:10:56.397 "progress": { 00:10:56.397 "blocks": 20480, 00:10:56.397 "percent": 32 00:10:56.397 } 00:10:56.397 }, 00:10:56.397 "base_bdevs_list": [ 00:10:56.397 { 00:10:56.397 "name": "spare", 00:10:56.397 "uuid": "f595ae82-9c66-5a3d-bc3a-11776f19bf19", 00:10:56.397 "is_configured": true, 00:10:56.397 "data_offset": 2048, 00:10:56.397 "data_size": 63488 00:10:56.397 }, 00:10:56.397 { 00:10:56.397 "name": "BaseBdev2", 00:10:56.397 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:56.397 "is_configured": true, 00:10:56.397 "data_offset": 2048, 00:10:56.397 "data_size": 63488 00:10:56.397 } 00:10:56.397 ] 00:10:56.397 }' 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.397 
21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.397 [2024-11-27 21:43:19.261934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:56.397 [2024-11-27 21:43:19.325732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:10:56.397 [2024-11-27 21:43:19.325851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.397 [2024-11-27 21:43:19.325894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:10:56.397 [2024-11-27 21:43:19.325948] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.397 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.397 "name": "raid_bdev1", 00:10:56.397 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:56.397 "strip_size_kb": 0, 00:10:56.397 "state": "online", 00:10:56.397 "raid_level": "raid1", 00:10:56.397 "superblock": true, 00:10:56.397 "num_base_bdevs": 2, 00:10:56.397 "num_base_bdevs_discovered": 1, 00:10:56.397 "num_base_bdevs_operational": 1, 00:10:56.397 "base_bdevs_list": [ 00:10:56.397 { 00:10:56.397 "name": null, 00:10:56.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.397 "is_configured": false, 00:10:56.397 "data_offset": 0, 00:10:56.397 "data_size": 63488 00:10:56.398 }, 00:10:56.398 { 00:10:56.398 "name": "BaseBdev2", 00:10:56.398 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:56.398 "is_configured": true, 00:10:56.398 "data_offset": 2048, 00:10:56.398 "data_size": 63488 00:10:56.398 } 00:10:56.398 ] 00:10:56.398 }' 00:10:56.398 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.398 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.966 21:43:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:56.966 "name": "raid_bdev1", 00:10:56.966 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:56.966 "strip_size_kb": 0, 00:10:56.966 "state": "online", 00:10:56.966 "raid_level": "raid1", 00:10:56.966 "superblock": true, 00:10:56.966 "num_base_bdevs": 2, 00:10:56.966 "num_base_bdevs_discovered": 1, 00:10:56.966 "num_base_bdevs_operational": 1, 00:10:56.966 "base_bdevs_list": [ 00:10:56.966 { 00:10:56.966 "name": null, 00:10:56.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:56.966 "is_configured": false, 00:10:56.966 "data_offset": 0, 00:10:56.966 "data_size": 63488 00:10:56.966 }, 00:10:56.966 { 00:10:56.966 "name": "BaseBdev2", 00:10:56.966 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:56.966 "is_configured": true, 00:10:56.966 "data_offset": 2048, 00:10:56.966 "data_size": 
63488 00:10:56.966 } 00:10:56.966 ] 00:10:56.966 }' 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:56.966 [2024-11-27 21:43:19.933641] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:10:56.966 [2024-11-27 21:43:19.933734] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.966 [2024-11-27 21:43:19.933771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:56.966 [2024-11-27 21:43:19.933808] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.966 [2024-11-27 21:43:19.934272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.966 [2024-11-27 21:43:19.934332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:10:56.966 [2024-11-27 21:43:19.934444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:10:56.966 [2024-11-27 21:43:19.934493] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:56.966 [2024-11-27 21:43:19.934528] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:56.966 [2024-11-27 21:43:19.934544] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:10:56.966 BaseBdev1 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.966 21:43:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.904 "name": "raid_bdev1", 00:10:57.904 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:57.904 "strip_size_kb": 0, 00:10:57.904 "state": "online", 00:10:57.904 "raid_level": "raid1", 00:10:57.904 "superblock": true, 00:10:57.904 "num_base_bdevs": 2, 00:10:57.904 "num_base_bdevs_discovered": 1, 00:10:57.904 "num_base_bdevs_operational": 1, 00:10:57.904 "base_bdevs_list": [ 00:10:57.904 { 00:10:57.904 "name": null, 00:10:57.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:57.904 "is_configured": false, 00:10:57.904 "data_offset": 0, 00:10:57.904 "data_size": 63488 00:10:57.904 }, 00:10:57.904 { 00:10:57.904 "name": "BaseBdev2", 00:10:57.904 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:57.904 "is_configured": true, 00:10:57.904 "data_offset": 2048, 00:10:57.904 "data_size": 63488 00:10:57.904 } 00:10:57.904 ] 00:10:57.904 }' 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.904 21:43:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.472 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:10:58.472 "name": "raid_bdev1", 00:10:58.472 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:58.472 "strip_size_kb": 0, 00:10:58.472 "state": "online", 00:10:58.472 "raid_level": "raid1", 00:10:58.472 "superblock": true, 00:10:58.472 "num_base_bdevs": 2, 00:10:58.472 "num_base_bdevs_discovered": 1, 00:10:58.473 "num_base_bdevs_operational": 1, 00:10:58.473 "base_bdevs_list": [ 00:10:58.473 { 00:10:58.473 "name": null, 00:10:58.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.473 "is_configured": false, 00:10:58.473 "data_offset": 0, 00:10:58.473 "data_size": 63488 00:10:58.473 }, 00:10:58.473 { 00:10:58.473 "name": "BaseBdev2", 00:10:58.473 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:58.473 "is_configured": true, 00:10:58.473 "data_offset": 2048, 00:10:58.473 "data_size": 63488 00:10:58.473 } 00:10:58.473 ] 00:10:58.473 }' 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:10:58.473 21:43:21 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:58.473 [2024-11-27 21:43:21.538987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:58.473 [2024-11-27 21:43:21.539194] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:10:58.473 [2024-11-27 21:43:21.539251] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:10:58.473 request: 00:10:58.473 { 00:10:58.473 "base_bdev": "BaseBdev1", 00:10:58.473 "raid_bdev": "raid_bdev1", 00:10:58.473 "method": 
"bdev_raid_add_base_bdev", 00:10:58.473 "req_id": 1 00:10:58.473 } 00:10:58.473 Got JSON-RPC error response 00:10:58.473 response: 00:10:58.473 { 00:10:58.473 "code": -22, 00:10:58.473 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:10:58.473 } 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:58.473 21:43:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.852 21:43:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.852 "name": "raid_bdev1", 00:10:59.852 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:10:59.852 "strip_size_kb": 0, 00:10:59.852 "state": "online", 00:10:59.852 "raid_level": "raid1", 00:10:59.852 "superblock": true, 00:10:59.852 "num_base_bdevs": 2, 00:10:59.852 "num_base_bdevs_discovered": 1, 00:10:59.852 "num_base_bdevs_operational": 1, 00:10:59.852 "base_bdevs_list": [ 00:10:59.852 { 00:10:59.852 "name": null, 00:10:59.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.852 "is_configured": false, 00:10:59.852 "data_offset": 0, 00:10:59.852 "data_size": 63488 00:10:59.852 }, 00:10:59.852 { 00:10:59.852 "name": "BaseBdev2", 00:10:59.852 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:10:59.852 "is_configured": true, 00:10:59.852 "data_offset": 2048, 00:10:59.852 "data_size": 63488 00:10:59.852 } 00:10:59.852 ] 00:10:59.852 }' 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.852 21:43:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:00.112 "name": "raid_bdev1", 00:11:00.112 "uuid": "915e7245-dca0-42ee-b007-8b5a51e42161", 00:11:00.112 "strip_size_kb": 0, 00:11:00.112 "state": "online", 00:11:00.112 "raid_level": "raid1", 00:11:00.112 "superblock": true, 00:11:00.112 "num_base_bdevs": 2, 00:11:00.112 "num_base_bdevs_discovered": 1, 00:11:00.112 "num_base_bdevs_operational": 1, 00:11:00.112 "base_bdevs_list": [ 00:11:00.112 { 00:11:00.112 "name": null, 00:11:00.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:00.112 "is_configured": false, 00:11:00.112 "data_offset": 0, 00:11:00.112 "data_size": 63488 00:11:00.112 }, 00:11:00.112 { 00:11:00.112 "name": "BaseBdev2", 00:11:00.112 "uuid": "17aca4a3-1128-5984-9cd0-cba6e888ec49", 00:11:00.112 "is_configured": true, 00:11:00.112 "data_offset": 2048, 00:11:00.112 "data_size": 63488 00:11:00.112 } 00:11:00.112 ] 00:11:00.112 }' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86125 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86125 ']' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86125 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86125 00:11:00.112 killing process with pid 86125 00:11:00.112 Received shutdown signal, test time was about 60.000000 seconds 00:11:00.112 00:11:00.112 Latency(us) 00:11:00.112 [2024-11-27T21:43:23.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:00.112 [2024-11-27T21:43:23.233Z] =================================================================================================================== 00:11:00.112 [2024-11-27T21:43:23.233Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86125' 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86125 00:11:00.112 [2024-11-27 21:43:23.170697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.112 [2024-11-27 
21:43:23.170831] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.112 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86125 00:11:00.112 [2024-11-27 21:43:23.170885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.112 [2024-11-27 21:43:23.170895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:00.112 [2024-11-27 21:43:23.201615] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.371 21:43:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:11:00.371 00:11:00.371 real 0m21.237s 00:11:00.371 user 0m26.602s 00:11:00.371 sys 0m3.447s 00:11:00.371 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.371 ************************************ 00:11:00.371 END TEST raid_rebuild_test_sb 00:11:00.371 ************************************ 00:11:00.371 21:43:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:00.371 21:43:23 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:11:00.372 21:43:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:00.372 21:43:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.372 21:43:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.372 ************************************ 00:11:00.372 START TEST raid_rebuild_test_io 00:11:00.372 ************************************ 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:00.372 
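The `(( i = 1 ))` / `(( i <= num_base_bdevs ))` / `echo BaseBdev$i` trace lines above show `raid_rebuild_test` constructing its `base_bdevs` array: the loop echoes one `BaseBdevN` name per slot and the script captures the output into the array. A standalone sketch of that construction, with names taken from the trace:

```shell
#!/usr/bin/env bash
# Rebuild the base_bdevs array the same way the traced loop does:
# one "BaseBdevN" name per base device slot, for num_base_bdevs=2
# as in this test invocation (raid_rebuild_test raid1 2 ...).
num_base_bdevs=2

base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    # The test script captures the echoed name; appending directly
    # is equivalent for this sketch.
    base_bdevs+=("BaseBdev$i")
done

echo "${base_bdevs[@]}"   # BaseBdev1 BaseBdev2
```

The resulting names are later passed to `bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1`, as the trace shows.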
21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:00.372 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=86837 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 86837 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 86837 ']' 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.630 21:43:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:00.630 [2024-11-27 21:43:23.573502] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:00.630 [2024-11-27 21:43:23.573707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86837 ] 00:11:00.630 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:00.630 Zero copy mechanism will not be used. 
00:11:00.630 [2024-11-27 21:43:23.727588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.889 [2024-11-27 21:43:23.752615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:00.889 [2024-11-27 21:43:23.794146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:00.889 [2024-11-27 21:43:23.794252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.523 BaseBdev1_malloc 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.523 [2024-11-27 21:43:24.412782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:01.523 [2024-11-27 21:43:24.412854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.523 [2024-11-27 21:43:24.412881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:01.523 [2024-11-27 
21:43:24.412893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.523 [2024-11-27 21:43:24.415077] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.523 [2024-11-27 21:43:24.415114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.523 BaseBdev1 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.523 BaseBdev2_malloc 00:11:01.523 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 [2024-11-27 21:43:24.441208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:01.524 [2024-11-27 21:43:24.441302] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.524 [2024-11-27 21:43:24.441344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:01.524 [2024-11-27 21:43:24.441373] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.524 [2024-11-27 21:43:24.443438] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:01.524 [2024-11-27 21:43:24.443507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.524 BaseBdev2 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 spare_malloc 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 spare_delay 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 [2024-11-27 21:43:24.481513] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:01.524 [2024-11-27 21:43:24.481610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.524 [2024-11-27 21:43:24.481646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.524 [2024-11-27 21:43:24.481672] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.524 [2024-11-27 21:43:24.483943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.524 [2024-11-27 21:43:24.484011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:01.524 spare 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 [2024-11-27 21:43:24.493527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.524 [2024-11-27 21:43:24.495367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.524 [2024-11-27 21:43:24.495507] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:01.524 [2024-11-27 21:43:24.495536] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:01.524 [2024-11-27 21:43:24.495854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:01.524 [2024-11-27 21:43:24.496029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:01.524 [2024-11-27 21:43:24.496085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:01.524 [2024-11-27 21:43:24.496273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.524 "name": "raid_bdev1", 00:11:01.524 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:01.524 "strip_size_kb": 0, 00:11:01.524 "state": "online", 00:11:01.524 "raid_level": "raid1", 00:11:01.524 "superblock": false, 00:11:01.524 "num_base_bdevs": 2, 00:11:01.524 
"num_base_bdevs_discovered": 2, 00:11:01.524 "num_base_bdevs_operational": 2, 00:11:01.524 "base_bdevs_list": [ 00:11:01.524 { 00:11:01.524 "name": "BaseBdev1", 00:11:01.524 "uuid": "9f2e5007-6d5f-5aa8-b140-3691ad714293", 00:11:01.524 "is_configured": true, 00:11:01.524 "data_offset": 0, 00:11:01.524 "data_size": 65536 00:11:01.524 }, 00:11:01.524 { 00:11:01.524 "name": "BaseBdev2", 00:11:01.524 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:01.524 "is_configured": true, 00:11:01.524 "data_offset": 0, 00:11:01.524 "data_size": 65536 00:11:01.524 } 00:11:01.524 ] 00:11:01.524 }' 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.524 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.091 [2024-11-27 21:43:24.921066] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:02.091 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.092 21:43:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 [2024-11-27 21:43:24.996668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.092 "name": "raid_bdev1", 00:11:02.092 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:02.092 "strip_size_kb": 0, 00:11:02.092 "state": "online", 00:11:02.092 "raid_level": "raid1", 00:11:02.092 "superblock": false, 00:11:02.092 "num_base_bdevs": 2, 00:11:02.092 "num_base_bdevs_discovered": 1, 00:11:02.092 "num_base_bdevs_operational": 1, 00:11:02.092 "base_bdevs_list": [ 00:11:02.092 { 00:11:02.092 "name": null, 00:11:02.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.092 "is_configured": false, 00:11:02.092 "data_offset": 0, 00:11:02.092 "data_size": 65536 00:11:02.092 }, 00:11:02.092 { 00:11:02.092 "name": "BaseBdev2", 00:11:02.092 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:02.092 "is_configured": true, 00:11:02.092 "data_offset": 0, 00:11:02.092 "data_size": 65536 00:11:02.092 } 00:11:02.092 ] 00:11:02.092 }' 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.092 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.092 [2024-11-27 21:43:25.096014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:02.092 I/O size of 3145728 is greater 
than zero copy threshold (65536). 00:11:02.092 Zero copy mechanism will not be used. 00:11:02.092 Running I/O for 60 seconds... 00:11:02.352 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:02.352 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.352 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:02.352 [2024-11-27 21:43:25.446846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:02.352 21:43:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.352 21:43:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:02.612 [2024-11-27 21:43:25.499429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:02.612 [2024-11-27 21:43:25.501425] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:02.612 [2024-11-27 21:43:25.614111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:02.612 [2024-11-27 21:43:25.614623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:02.872 [2024-11-27 21:43:25.817872] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:02.872 [2024-11-27 21:43:25.818260] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:03.132 182.00 IOPS, 546.00 MiB/s [2024-11-27T21:43:26.253Z] [2024-11-27 21:43:26.159290] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:03.392 [2024-11-27 21:43:26.288756] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.392 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:03.652 "name": "raid_bdev1", 00:11:03.652 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:03.652 "strip_size_kb": 0, 00:11:03.652 "state": "online", 00:11:03.652 "raid_level": "raid1", 00:11:03.652 "superblock": false, 00:11:03.652 "num_base_bdevs": 2, 00:11:03.652 "num_base_bdevs_discovered": 2, 00:11:03.652 "num_base_bdevs_operational": 2, 00:11:03.652 "process": { 00:11:03.652 "type": "rebuild", 00:11:03.652 "target": "spare", 00:11:03.652 "progress": { 00:11:03.652 "blocks": 12288, 00:11:03.652 "percent": 18 00:11:03.652 } 00:11:03.652 }, 00:11:03.652 "base_bdevs_list": [ 00:11:03.652 { 00:11:03.652 "name": "spare", 00:11:03.652 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:03.652 
"is_configured": true, 00:11:03.652 "data_offset": 0, 00:11:03.652 "data_size": 65536 00:11:03.652 }, 00:11:03.652 { 00:11:03.652 "name": "BaseBdev2", 00:11:03.652 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:03.652 "is_configured": true, 00:11:03.652 "data_offset": 0, 00:11:03.652 "data_size": 65536 00:11:03.652 } 00:11:03.652 ] 00:11:03.652 }' 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:03.652 [2024-11-27 21:43:26.622605] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.652 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.652 [2024-11-27 21:43:26.634173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:03.652 [2024-11-27 21:43:26.748538] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:03.652 [2024-11-27 21:43:26.750296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.652 [2024-11-27 21:43:26.750361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:03.652 [2024-11-27 21:43:26.750403] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:03.913 [2024-11-27 21:43:26.773388] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.913 "name": "raid_bdev1", 00:11:03.913 
"uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:03.913 "strip_size_kb": 0, 00:11:03.913 "state": "online", 00:11:03.913 "raid_level": "raid1", 00:11:03.913 "superblock": false, 00:11:03.913 "num_base_bdevs": 2, 00:11:03.913 "num_base_bdevs_discovered": 1, 00:11:03.913 "num_base_bdevs_operational": 1, 00:11:03.913 "base_bdevs_list": [ 00:11:03.913 { 00:11:03.913 "name": null, 00:11:03.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.913 "is_configured": false, 00:11:03.913 "data_offset": 0, 00:11:03.913 "data_size": 65536 00:11:03.913 }, 00:11:03.913 { 00:11:03.913 "name": "BaseBdev2", 00:11:03.913 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:03.913 "is_configured": true, 00:11:03.913 "data_offset": 0, 00:11:03.913 "data_size": 65536 00:11:03.913 } 00:11:03.913 ] 00:11:03.913 }' 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.913 21:43:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.173 177.50 IOPS, 532.50 MiB/s [2024-11-27T21:43:27.294Z] 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:04.173 "name": "raid_bdev1", 00:11:04.173 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:04.173 "strip_size_kb": 0, 00:11:04.173 "state": "online", 00:11:04.173 "raid_level": "raid1", 00:11:04.173 "superblock": false, 00:11:04.173 "num_base_bdevs": 2, 00:11:04.173 "num_base_bdevs_discovered": 1, 00:11:04.173 "num_base_bdevs_operational": 1, 00:11:04.173 "base_bdevs_list": [ 00:11:04.173 { 00:11:04.173 "name": null, 00:11:04.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.173 "is_configured": false, 00:11:04.173 "data_offset": 0, 00:11:04.173 "data_size": 65536 00:11:04.173 }, 00:11:04.173 { 00:11:04.173 "name": "BaseBdev2", 00:11:04.173 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:04.173 "is_configured": true, 00:11:04.173 "data_offset": 0, 00:11:04.173 "data_size": 65536 00:11:04.173 } 00:11:04.173 ] 00:11:04.173 }' 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:04.173 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:04.432 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:04.432 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:04.432 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.433 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:04.433 [2024-11-27 21:43:27.303030] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:04.433 21:43:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.433 21:43:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:04.433 [2024-11-27 21:43:27.345244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:04.433 [2024-11-27 21:43:27.347193] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:04.433 [2024-11-27 21:43:27.454562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:04.433 [2024-11-27 21:43:27.455137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:04.693 [2024-11-27 21:43:27.674642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:04.693 [2024-11-27 21:43:27.675037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:04.952 [2024-11-27 21:43:28.007137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:04.952 [2024-11-27 21:43:28.007548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:05.212 178.00 IOPS, 534.00 MiB/s [2024-11-27T21:43:28.333Z] [2024-11-27 21:43:28.132163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.212 21:43:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.212 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.472 "name": "raid_bdev1", 00:11:05.472 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:05.472 "strip_size_kb": 0, 00:11:05.472 "state": "online", 00:11:05.472 "raid_level": "raid1", 00:11:05.472 "superblock": false, 00:11:05.472 "num_base_bdevs": 2, 00:11:05.472 "num_base_bdevs_discovered": 2, 00:11:05.472 "num_base_bdevs_operational": 2, 00:11:05.472 "process": { 00:11:05.472 "type": "rebuild", 00:11:05.472 "target": "spare", 00:11:05.472 "progress": { 00:11:05.472 "blocks": 10240, 00:11:05.472 "percent": 15 00:11:05.472 } 00:11:05.472 }, 00:11:05.472 "base_bdevs_list": [ 00:11:05.472 { 00:11:05.472 "name": "spare", 00:11:05.472 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:05.472 "is_configured": true, 00:11:05.472 "data_offset": 0, 00:11:05.472 "data_size": 65536 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "name": "BaseBdev2", 00:11:05.472 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:05.472 "is_configured": true, 00:11:05.472 "data_offset": 0, 00:11:05.472 "data_size": 65536 00:11:05.472 } 00:11:05.472 ] 
00:11:05.472 }' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=317 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:05.472 [2024-11-27 21:43:28.464653] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:05.472 "name": "raid_bdev1", 00:11:05.472 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:05.472 "strip_size_kb": 0, 00:11:05.472 "state": "online", 00:11:05.472 "raid_level": "raid1", 00:11:05.472 "superblock": false, 00:11:05.472 "num_base_bdevs": 2, 00:11:05.472 "num_base_bdevs_discovered": 2, 00:11:05.472 "num_base_bdevs_operational": 2, 00:11:05.472 "process": { 00:11:05.472 "type": "rebuild", 00:11:05.472 "target": "spare", 00:11:05.472 "progress": { 00:11:05.472 "blocks": 14336, 00:11:05.472 "percent": 21 00:11:05.472 } 00:11:05.472 }, 00:11:05.472 "base_bdevs_list": [ 00:11:05.472 { 00:11:05.472 "name": "spare", 00:11:05.472 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:05.472 "is_configured": true, 00:11:05.472 "data_offset": 0, 00:11:05.472 "data_size": 65536 00:11:05.472 }, 00:11:05.472 { 00:11:05.472 "name": "BaseBdev2", 00:11:05.472 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:05.472 "is_configured": true, 00:11:05.472 "data_offset": 0, 00:11:05.472 "data_size": 65536 00:11:05.472 } 00:11:05.472 ] 00:11:05.472 }' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:05.472 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:05.732 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:11:05.732 21:43:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:05.732 [2024-11-27 21:43:28.679174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:05.992 [2024-11-27 21:43:29.011589] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:11:06.251 146.25 IOPS, 438.75 MiB/s [2024-11-27T21:43:29.372Z] [2024-11-27 21:43:29.131755] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:06.510 [2024-11-27 21:43:29.457700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:06.510 21:43:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:06.769 "name": "raid_bdev1", 00:11:06.769 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:06.769 "strip_size_kb": 0, 00:11:06.769 "state": "online", 00:11:06.769 "raid_level": "raid1", 00:11:06.769 "superblock": false, 00:11:06.769 "num_base_bdevs": 2, 00:11:06.769 "num_base_bdevs_discovered": 2, 00:11:06.769 "num_base_bdevs_operational": 2, 00:11:06.769 "process": { 00:11:06.769 "type": "rebuild", 00:11:06.769 "target": "spare", 00:11:06.769 "progress": { 00:11:06.769 "blocks": 28672, 00:11:06.769 "percent": 43 00:11:06.769 } 00:11:06.769 }, 00:11:06.769 "base_bdevs_list": [ 00:11:06.769 { 00:11:06.769 "name": "spare", 00:11:06.769 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:06.769 "is_configured": true, 00:11:06.769 "data_offset": 0, 00:11:06.769 "data_size": 65536 00:11:06.769 }, 00:11:06.769 { 00:11:06.769 "name": "BaseBdev2", 00:11:06.769 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:06.769 "is_configured": true, 00:11:06.769 "data_offset": 0, 00:11:06.769 "data_size": 65536 00:11:06.769 } 00:11:06.769 ] 00:11:06.769 }' 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:06.769 21:43:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:07.288 129.80 IOPS, 389.40 MiB/s [2024-11-27T21:43:30.409Z] [2024-11-27 21:43:30.209012] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:07.288 
[2024-11-27 21:43:30.209313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:11:07.547 [2024-11-27 21:43:30.446315] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:11:07.547 [2024-11-27 21:43:30.661190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:07.806 "name": "raid_bdev1", 00:11:07.806 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:07.806 "strip_size_kb": 0, 00:11:07.806 "state": "online", 00:11:07.806 "raid_level": "raid1", 
00:11:07.806 "superblock": false, 00:11:07.806 "num_base_bdevs": 2, 00:11:07.806 "num_base_bdevs_discovered": 2, 00:11:07.806 "num_base_bdevs_operational": 2, 00:11:07.806 "process": { 00:11:07.806 "type": "rebuild", 00:11:07.806 "target": "spare", 00:11:07.806 "progress": { 00:11:07.806 "blocks": 47104, 00:11:07.806 "percent": 71 00:11:07.806 } 00:11:07.806 }, 00:11:07.806 "base_bdevs_list": [ 00:11:07.806 { 00:11:07.806 "name": "spare", 00:11:07.806 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:07.806 "is_configured": true, 00:11:07.806 "data_offset": 0, 00:11:07.806 "data_size": 65536 00:11:07.806 }, 00:11:07.806 { 00:11:07.806 "name": "BaseBdev2", 00:11:07.806 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:07.806 "is_configured": true, 00:11:07.806 "data_offset": 0, 00:11:07.806 "data_size": 65536 00:11:07.806 } 00:11:07.806 ] 00:11:07.806 }' 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:07.806 21:43:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:08.635 114.67 IOPS, 344.00 MiB/s [2024-11-27T21:43:31.756Z] [2024-11-27 21:43:31.626489] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:08.635 [2024-11-27 21:43:31.731543] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:08.635 [2024-11-27 21:43:31.733438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:08.895 21:43:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:08.895 "name": "raid_bdev1", 00:11:08.895 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:08.895 "strip_size_kb": 0, 00:11:08.895 "state": "online", 00:11:08.895 "raid_level": "raid1", 00:11:08.895 "superblock": false, 00:11:08.895 "num_base_bdevs": 2, 00:11:08.895 "num_base_bdevs_discovered": 2, 00:11:08.895 "num_base_bdevs_operational": 2, 00:11:08.895 "base_bdevs_list": [ 00:11:08.895 { 00:11:08.895 "name": "spare", 00:11:08.895 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:08.895 "is_configured": true, 00:11:08.895 "data_offset": 0, 00:11:08.895 "data_size": 65536 00:11:08.895 }, 00:11:08.895 { 00:11:08.895 "name": "BaseBdev2", 00:11:08.895 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:08.895 "is_configured": true, 00:11:08.895 "data_offset": 0, 00:11:08.895 
"data_size": 65536 00:11:08.895 } 00:11:08.895 ] 00:11:08.895 }' 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:08.895 21:43:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:08.895 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.155 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:09.155 "name": "raid_bdev1", 00:11:09.155 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:09.155 "strip_size_kb": 0, 00:11:09.155 "state": "online", 00:11:09.155 "raid_level": 
"raid1", 00:11:09.155 "superblock": false, 00:11:09.155 "num_base_bdevs": 2, 00:11:09.155 "num_base_bdevs_discovered": 2, 00:11:09.155 "num_base_bdevs_operational": 2, 00:11:09.155 "base_bdevs_list": [ 00:11:09.155 { 00:11:09.155 "name": "spare", 00:11:09.155 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:09.155 "is_configured": true, 00:11:09.155 "data_offset": 0, 00:11:09.155 "data_size": 65536 00:11:09.155 }, 00:11:09.155 { 00:11:09.155 "name": "BaseBdev2", 00:11:09.155 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:09.156 "is_configured": true, 00:11:09.156 "data_offset": 0, 00:11:09.156 "data_size": 65536 00:11:09.156 } 00:11:09.156 ] 00:11:09.156 }' 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:09.156 103.14 IOPS, 309.43 MiB/s [2024-11-27T21:43:32.277Z] 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.156 21:43:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.156 "name": "raid_bdev1", 00:11:09.156 "uuid": "45e871a0-ae9c-4da4-b8a1-c17df402a022", 00:11:09.156 "strip_size_kb": 0, 00:11:09.156 "state": "online", 00:11:09.156 "raid_level": "raid1", 00:11:09.156 "superblock": false, 00:11:09.156 "num_base_bdevs": 2, 00:11:09.156 "num_base_bdevs_discovered": 2, 00:11:09.156 "num_base_bdevs_operational": 2, 00:11:09.156 "base_bdevs_list": [ 00:11:09.156 { 00:11:09.156 "name": "spare", 00:11:09.156 "uuid": "165a3438-c524-5072-ad75-65beb7501268", 00:11:09.156 "is_configured": true, 00:11:09.156 "data_offset": 0, 00:11:09.156 "data_size": 65536 00:11:09.156 }, 00:11:09.156 { 00:11:09.156 "name": "BaseBdev2", 00:11:09.156 "uuid": "7910bac5-a4f3-5242-bd31-a211379f1507", 00:11:09.156 "is_configured": true, 00:11:09.156 "data_offset": 0, 00:11:09.156 "data_size": 65536 00:11:09.156 } 00:11:09.156 ] 00:11:09.156 }' 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.156 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.728 [2024-11-27 21:43:32.563554] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:09.728 [2024-11-27 21:43:32.563622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:09.728 00:11:09.728 Latency(us) 00:11:09.728 [2024-11-27T21:43:32.849Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:09.728 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:09.728 raid_bdev1 : 7.51 97.98 293.95 0.00 0.00 13305.80 275.45 113557.58 00:11:09.728 [2024-11-27T21:43:32.849Z] =================================================================================================================== 00:11:09.728 [2024-11-27T21:43:32.849Z] Total : 97.98 293.95 0.00 0.00 13305.80 275.45 113557.58 00:11:09.728 [2024-11-27 21:43:32.598833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:09.728 [2024-11-27 21:43:32.598939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.728 [2024-11-27 21:43:32.599062] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:09.728 [2024-11-27 21:43:32.599120] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:09.728 { 00:11:09.728 "results": [ 00:11:09.728 { 00:11:09.728 "job": "raid_bdev1", 00:11:09.728 "core_mask": "0x1", 00:11:09.728 "workload": "randrw", 00:11:09.728 "percentage": 50, 00:11:09.728 "status": "finished", 00:11:09.728 "queue_depth": 2, 00:11:09.728 "io_size": 3145728, 00:11:09.728 
"runtime": 7.511484, 00:11:09.728 "iops": 97.98330130237913, 00:11:09.728 "mibps": 293.9499039071374, 00:11:09.728 "io_failed": 0, 00:11:09.728 "io_timeout": 0, 00:11:09.728 "avg_latency_us": 13305.80170875261, 00:11:09.728 "min_latency_us": 275.45152838427947, 00:11:09.728 "max_latency_us": 113557.57554585153 00:11:09.728 } 00:11:09.728 ], 00:11:09.728 "core_count": 1 00:11:09.728 } 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:09.728 21:43:32 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:09.728 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:10.001 /dev/nbd0 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.001 1+0 records in 00:11:10.001 1+0 records out 00:11:10.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529564 s, 7.7 MB/s 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.001 21:43:32 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.001 21:43:32 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:10.001 /dev/nbd1 
00:11:10.001 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:10.261 1+0 records in 00:11:10.261 1+0 records out 00:11:10.261 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460147 s, 8.9 MB/s 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:11:10.261 21:43:33 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.261 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 86837 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 86837 ']' 00:11:10.520 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 86837 00:11:10.779 21:43:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # uname 00:11:10.779 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.779 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86837 00:11:10.779 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.779 killing process with pid 86837 00:11:10.779 Received shutdown signal, test time was about 8.598176 seconds 00:11:10.779 00:11:10.779 Latency(us) 00:11:10.779 [2024-11-27T21:43:33.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.779 [2024-11-27T21:43:33.900Z] =================================================================================================================== 00:11:10.779 [2024-11-27T21:43:33.900Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:10.780 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.780 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86837' 00:11:10.780 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 86837 00:11:10.780 [2024-11-27 21:43:33.680018] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.780 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 86837 00:11:10.780 [2024-11-27 21:43:33.705721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:11.039 00:11:11.039 real 0m10.426s 00:11:11.039 user 0m13.453s 00:11:11.039 sys 0m1.336s 00:11:11.039 ************************************ 00:11:11.039 END TEST raid_rebuild_test_io 00:11:11.039 ************************************ 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.039 21:43:33 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:11:11.039 21:43:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:11.039 21:43:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.039 21:43:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:11.039 ************************************ 00:11:11.039 START TEST raid_rebuild_test_sb_io 00:11:11.039 ************************************ 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=87194 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 87194 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 87194 ']' 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.039 
21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:11.039 21:43:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.039 [2024-11-27 21:43:34.069802] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:11.039 [2024-11-27 21:43:34.070035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87194 ] 00:11:11.039 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:11.039 Zero copy mechanism will not be used. 
00:11:11.299 [2024-11-27 21:43:34.203745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.299 [2024-11-27 21:43:34.229133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.299 [2024-11-27 21:43:34.271076] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.299 [2024-11-27 21:43:34.271203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 BaseBdev1_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 [2024-11-27 21:43:34.914226] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:11.868 [2024-11-27 21:43:34.914321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.868 [2024-11-27 21:43:34.914379] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:11:11.868 [2024-11-27 21:43:34.914409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.868 [2024-11-27 21:43:34.916517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.868 [2024-11-27 21:43:34.916587] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.868 BaseBdev1 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 BaseBdev2_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 [2024-11-27 21:43:34.942640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:11.868 [2024-11-27 21:43:34.942692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.868 [2024-11-27 21:43:34.942714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:11.868 [2024-11-27 21:43:34.942722] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.868 [2024-11-27 21:43:34.944735] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.868 [2024-11-27 21:43:34.944776] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:11.868 BaseBdev2 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 spare_malloc 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 spare_delay 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.868 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:11.868 [2024-11-27 21:43:34.983007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:11.868 [2024-11-27 21:43:34.983090] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.868 [2024-11-27 21:43:34.983113] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:11.868 [2024-11-27 21:43:34.983122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.868 [2024-11-27 21:43:34.985327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.868 [2024-11-27 21:43:34.985360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:12.128 spare 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.128 [2024-11-27 21:43:34.995034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.128 [2024-11-27 21:43:34.996895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.128 [2024-11-27 21:43:34.997096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:12.128 [2024-11-27 21:43:34.997140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:12.128 [2024-11-27 21:43:34.997461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:11:12.128 [2024-11-27 21:43:34.997655] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:12.128 [2024-11-27 21:43:34.997703] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:12.128 [2024-11-27 21:43:34.997898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.128 21:43:34 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:12.128 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.129 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.129 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.129 21:43:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.129 "name": "raid_bdev1", 00:11:12.129 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:12.129 
"strip_size_kb": 0, 00:11:12.129 "state": "online", 00:11:12.129 "raid_level": "raid1", 00:11:12.129 "superblock": true, 00:11:12.129 "num_base_bdevs": 2, 00:11:12.129 "num_base_bdevs_discovered": 2, 00:11:12.129 "num_base_bdevs_operational": 2, 00:11:12.129 "base_bdevs_list": [ 00:11:12.129 { 00:11:12.129 "name": "BaseBdev1", 00:11:12.129 "uuid": "99a45aee-d386-5734-8b88-9f2336457924", 00:11:12.129 "is_configured": true, 00:11:12.129 "data_offset": 2048, 00:11:12.129 "data_size": 63488 00:11:12.129 }, 00:11:12.129 { 00:11:12.129 "name": "BaseBdev2", 00:11:12.129 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:12.129 "is_configured": true, 00:11:12.129 "data_offset": 2048, 00:11:12.129 "data_size": 63488 00:11:12.129 } 00:11:12.129 ] 00:11:12.129 }' 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.129 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.387 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:12.387 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:12.387 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.387 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.387 [2024-11-27 21:43:35.402576] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.388 [2024-11-27 21:43:35.502180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.388 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:12.646 21:43:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.646 "name": "raid_bdev1", 00:11:12.646 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:12.646 "strip_size_kb": 0, 00:11:12.646 "state": "online", 00:11:12.646 "raid_level": "raid1", 00:11:12.646 "superblock": true, 00:11:12.646 "num_base_bdevs": 2, 00:11:12.646 "num_base_bdevs_discovered": 1, 00:11:12.646 "num_base_bdevs_operational": 1, 00:11:12.646 "base_bdevs_list": [ 00:11:12.646 { 00:11:12.646 "name": null, 00:11:12.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.646 "is_configured": false, 00:11:12.646 "data_offset": 0, 00:11:12.646 "data_size": 63488 00:11:12.646 }, 00:11:12.646 { 00:11:12.646 "name": "BaseBdev2", 00:11:12.646 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:12.646 "is_configured": true, 00:11:12.646 "data_offset": 2048, 00:11:12.646 "data_size": 63488 00:11:12.646 } 00:11:12.646 ] 00:11:12.646 }' 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.646 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.646 [2024-11-27 21:43:35.611021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:11:12.646 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:12.646 Zero copy mechanism will not be used. 00:11:12.646 Running I/O for 60 seconds... 00:11:12.907 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:12.907 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.907 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:12.907 [2024-11-27 21:43:35.942366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:12.907 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.907 21:43:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:12.907 [2024-11-27 21:43:35.996314] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:12.907 [2024-11-27 21:43:35.998284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:13.166 [2024-11-27 21:43:36.111166] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.166 [2024-11-27 21:43:36.111647] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:13.426 [2024-11-27 21:43:36.313931] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:13.426 [2024-11-27 21:43:36.314226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 
6144 00:11:13.686 [2024-11-27 21:43:36.554704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:13.686 178.00 IOPS, 534.00 MiB/s [2024-11-27T21:43:36.807Z] [2024-11-27 21:43:36.776434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:13.686 [2024-11-27 21:43:36.776764] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.945 21:43:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:13.945 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.945 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:13.945 "name": "raid_bdev1", 00:11:13.945 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:13.945 "strip_size_kb": 0, 00:11:13.945 "state": "online", 00:11:13.945 "raid_level": "raid1", 
00:11:13.945 "superblock": true, 00:11:13.945 "num_base_bdevs": 2, 00:11:13.945 "num_base_bdevs_discovered": 2, 00:11:13.945 "num_base_bdevs_operational": 2, 00:11:13.945 "process": { 00:11:13.945 "type": "rebuild", 00:11:13.945 "target": "spare", 00:11:13.945 "progress": { 00:11:13.945 "blocks": 12288, 00:11:13.945 "percent": 19 00:11:13.945 } 00:11:13.945 }, 00:11:13.945 "base_bdevs_list": [ 00:11:13.945 { 00:11:13.945 "name": "spare", 00:11:13.945 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:13.945 "is_configured": true, 00:11:13.945 "data_offset": 2048, 00:11:13.945 "data_size": 63488 00:11:13.945 }, 00:11:13.945 { 00:11:13.945 "name": "BaseBdev2", 00:11:13.945 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:13.945 "is_configured": true, 00:11:13.945 "data_offset": 2048, 00:11:13.945 "data_size": 63488 00:11:13.945 } 00:11:13.945 ] 00:11:13.945 }' 00:11:13.945 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.204 [2024-11-27 21:43:37.111784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.204 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.204 [2024-11-27 21:43:37.141689] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.204 [2024-11-27 21:43:37.224561] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:11:14.464 [2024-11-27 21:43:37.330974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:14.464 [2024-11-27 21:43:37.333661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:14.464 [2024-11-27 21:43:37.333745] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:14.464 [2024-11-27 21:43:37.333777] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:14.464 [2024-11-27 21:43:37.361750] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000026d0 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.464 "name": "raid_bdev1", 00:11:14.464 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:14.464 "strip_size_kb": 0, 00:11:14.464 "state": "online", 00:11:14.464 "raid_level": "raid1", 00:11:14.464 "superblock": true, 00:11:14.464 "num_base_bdevs": 2, 00:11:14.464 "num_base_bdevs_discovered": 1, 00:11:14.464 "num_base_bdevs_operational": 1, 00:11:14.464 "base_bdevs_list": [ 00:11:14.464 { 00:11:14.464 "name": null, 00:11:14.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.464 "is_configured": false, 00:11:14.464 "data_offset": 0, 00:11:14.464 "data_size": 63488 00:11:14.464 }, 00:11:14.464 { 00:11:14.464 "name": "BaseBdev2", 00:11:14.464 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:14.464 "is_configured": true, 00:11:14.464 "data_offset": 2048, 00:11:14.464 "data_size": 63488 00:11:14.464 } 00:11:14.464 ] 00:11:14.464 }' 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.464 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.723 144.00 IOPS, 432.00 MiB/s [2024-11-27T21:43:37.844Z] 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:14.723 21:43:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.723 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:14.982 "name": "raid_bdev1", 00:11:14.982 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:14.982 "strip_size_kb": 0, 00:11:14.982 "state": "online", 00:11:14.982 "raid_level": "raid1", 00:11:14.982 "superblock": true, 00:11:14.982 "num_base_bdevs": 2, 00:11:14.982 "num_base_bdevs_discovered": 1, 00:11:14.982 "num_base_bdevs_operational": 1, 00:11:14.982 "base_bdevs_list": [ 00:11:14.982 { 00:11:14.982 "name": null, 00:11:14.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.982 "is_configured": false, 00:11:14.982 "data_offset": 0, 00:11:14.982 "data_size": 63488 00:11:14.982 }, 00:11:14.982 { 00:11:14.982 "name": "BaseBdev2", 00:11:14.982 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:14.982 "is_configured": true, 00:11:14.982 "data_offset": 2048, 00:11:14.982 "data_size": 63488 00:11:14.982 } 00:11:14.982 ] 00:11:14.982 }' 00:11:14.982 21:43:37 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:14.982 [2024-11-27 21:43:37.962530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.982 21:43:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:14.982 [2024-11-27 21:43:38.000464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:11:14.982 [2024-11-27 21:43:38.002414] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:15.242 [2024-11-27 21:43:38.119946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.242 [2024-11-27 21:43:38.120415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:11:15.242 [2024-11-27 21:43:38.346171] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:11:15.242 [2024-11-27 21:43:38.346434] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 
00:11:15.760 165.33 IOPS, 496.00 MiB/s [2024-11-27T21:43:38.881Z] [2024-11-27 21:43:38.675485] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:15.760 [2024-11-27 21:43:38.676007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:11:16.020 [2024-11-27 21:43:38.895763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.021 [2024-11-27 21:43:38.896164] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.021 21:43:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.021 "name": "raid_bdev1", 00:11:16.021 
"uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:16.021 "strip_size_kb": 0, 00:11:16.021 "state": "online", 00:11:16.021 "raid_level": "raid1", 00:11:16.021 "superblock": true, 00:11:16.021 "num_base_bdevs": 2, 00:11:16.021 "num_base_bdevs_discovered": 2, 00:11:16.021 "num_base_bdevs_operational": 2, 00:11:16.021 "process": { 00:11:16.021 "type": "rebuild", 00:11:16.021 "target": "spare", 00:11:16.021 "progress": { 00:11:16.021 "blocks": 10240, 00:11:16.021 "percent": 16 00:11:16.021 } 00:11:16.021 }, 00:11:16.021 "base_bdevs_list": [ 00:11:16.021 { 00:11:16.021 "name": "spare", 00:11:16.021 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:16.021 "is_configured": true, 00:11:16.021 "data_offset": 2048, 00:11:16.021 "data_size": 63488 00:11:16.021 }, 00:11:16.021 { 00:11:16.021 "name": "BaseBdev2", 00:11:16.021 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:16.021 "is_configured": true, 00:11:16.021 "data_offset": 2048, 00:11:16.021 "data_size": 63488 00:11:16.021 } 00:11:16.021 ] 00:11:16.021 }' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:16.021 [2024-11-27 21:43:39.136335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 1 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:16.021 2288 offset_end: 18432 00:11:16.021 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:16.021 21:43:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=328 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:16.021 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.286 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:16.286 "name": "raid_bdev1", 00:11:16.286 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:16.286 "strip_size_kb": 0, 00:11:16.286 "state": "online", 00:11:16.286 "raid_level": "raid1", 00:11:16.286 "superblock": true, 
00:11:16.286 "num_base_bdevs": 2, 00:11:16.286 "num_base_bdevs_discovered": 2, 00:11:16.286 "num_base_bdevs_operational": 2, 00:11:16.286 "process": { 00:11:16.286 "type": "rebuild", 00:11:16.286 "target": "spare", 00:11:16.286 "progress": { 00:11:16.286 "blocks": 14336, 00:11:16.286 "percent": 22 00:11:16.286 } 00:11:16.286 }, 00:11:16.286 "base_bdevs_list": [ 00:11:16.286 { 00:11:16.286 "name": "spare", 00:11:16.286 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:16.286 "is_configured": true, 00:11:16.287 "data_offset": 2048, 00:11:16.287 "data_size": 63488 00:11:16.287 }, 00:11:16.287 { 00:11:16.287 "name": "BaseBdev2", 00:11:16.287 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:16.287 "is_configured": true, 00:11:16.287 "data_offset": 2048, 00:11:16.287 "data_size": 63488 00:11:16.287 } 00:11:16.287 ] 00:11:16.287 }' 00:11:16.287 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:16.287 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:16.287 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:16.287 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:16.287 21:43:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:16.287 [2024-11-27 21:43:39.352944] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:11:16.808 139.50 IOPS, 418.50 MiB/s [2024-11-27T21:43:39.929Z] [2024-11-27 21:43:39.725335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:17.379 "name": "raid_bdev1", 00:11:17.379 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:17.379 "strip_size_kb": 0, 00:11:17.379 "state": "online", 00:11:17.379 "raid_level": "raid1", 00:11:17.379 "superblock": true, 00:11:17.379 "num_base_bdevs": 2, 00:11:17.379 "num_base_bdevs_discovered": 2, 00:11:17.379 "num_base_bdevs_operational": 2, 00:11:17.379 "process": { 00:11:17.379 "type": "rebuild", 00:11:17.379 "target": "spare", 00:11:17.379 "progress": { 00:11:17.379 "blocks": 30720, 00:11:17.379 "percent": 48 00:11:17.379 } 00:11:17.379 }, 00:11:17.379 "base_bdevs_list": [ 00:11:17.379 { 00:11:17.379 "name": "spare", 00:11:17.379 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:17.379 "is_configured": true, 00:11:17.379 "data_offset": 2048, 00:11:17.379 "data_size": 63488 00:11:17.379 }, 00:11:17.379 { 
00:11:17.379 "name": "BaseBdev2", 00:11:17.379 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:17.379 "is_configured": true, 00:11:17.379 "data_offset": 2048, 00:11:17.379 "data_size": 63488 00:11:17.379 } 00:11:17.379 ] 00:11:17.379 }' 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:17.379 21:43:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:18.242 120.00 IOPS, 360.00 MiB/s [2024-11-27T21:43:41.363Z] [2024-11-27 21:43:41.295903] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:11:18.503 [2024-11-27 21:43:41.413878] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:18.503 "name": "raid_bdev1", 00:11:18.503 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:18.503 "strip_size_kb": 0, 00:11:18.503 "state": "online", 00:11:18.503 "raid_level": "raid1", 00:11:18.503 "superblock": true, 00:11:18.503 "num_base_bdevs": 2, 00:11:18.503 "num_base_bdevs_discovered": 2, 00:11:18.503 "num_base_bdevs_operational": 2, 00:11:18.503 "process": { 00:11:18.503 "type": "rebuild", 00:11:18.503 "target": "spare", 00:11:18.503 "progress": { 00:11:18.503 "blocks": 53248, 00:11:18.503 "percent": 83 00:11:18.503 } 00:11:18.503 }, 00:11:18.503 "base_bdevs_list": [ 00:11:18.503 { 00:11:18.503 "name": "spare", 00:11:18.503 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:18.503 "is_configured": true, 00:11:18.503 "data_offset": 2048, 00:11:18.503 "data_size": 63488 00:11:18.503 }, 00:11:18.503 { 00:11:18.503 "name": "BaseBdev2", 00:11:18.503 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:18.503 "is_configured": true, 00:11:18.503 "data_offset": 2048, 00:11:18.503 "data_size": 63488 00:11:18.503 } 00:11:18.503 ] 00:11:18.503 }' 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:18.503 
21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:18.503 21:43:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:19.023 106.50 IOPS, 319.50 MiB/s [2024-11-27T21:43:42.144Z] [2024-11-27 21:43:42.058670] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:19.282 [2024-11-27 21:43:42.163883] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:19.282 [2024-11-27 21:43:42.165881] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.542 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.543 
"name": "raid_bdev1", 00:11:19.543 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:19.543 "strip_size_kb": 0, 00:11:19.543 "state": "online", 00:11:19.543 "raid_level": "raid1", 00:11:19.543 "superblock": true, 00:11:19.543 "num_base_bdevs": 2, 00:11:19.543 "num_base_bdevs_discovered": 2, 00:11:19.543 "num_base_bdevs_operational": 2, 00:11:19.543 "base_bdevs_list": [ 00:11:19.543 { 00:11:19.543 "name": "spare", 00:11:19.543 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:19.543 "is_configured": true, 00:11:19.543 "data_offset": 2048, 00:11:19.543 "data_size": 63488 00:11:19.543 }, 00:11:19.543 { 00:11:19.543 "name": "BaseBdev2", 00:11:19.543 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:19.543 "is_configured": true, 00:11:19.543 "data_offset": 2048, 00:11:19.543 "data_size": 63488 00:11:19.543 } 00:11:19.543 ] 00:11:19.543 }' 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.543 95.00 IOPS, 285.00 MiB/s [2024-11-27T21:43:42.664Z] 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:19.543 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:19.804 "name": "raid_bdev1", 00:11:19.804 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:19.804 "strip_size_kb": 0, 00:11:19.804 "state": "online", 00:11:19.804 "raid_level": "raid1", 00:11:19.804 "superblock": true, 00:11:19.804 "num_base_bdevs": 2, 00:11:19.804 "num_base_bdevs_discovered": 2, 00:11:19.804 "num_base_bdevs_operational": 2, 00:11:19.804 "base_bdevs_list": [ 00:11:19.804 { 00:11:19.804 "name": "spare", 00:11:19.804 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:19.804 "is_configured": true, 00:11:19.804 "data_offset": 2048, 00:11:19.804 "data_size": 63488 00:11:19.804 }, 00:11:19.804 { 00:11:19.804 "name": "BaseBdev2", 00:11:19.804 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:19.804 "is_configured": true, 00:11:19.804 "data_offset": 2048, 00:11:19.804 "data_size": 63488 00:11:19.804 } 00:11:19.804 ] 00:11:19.804 }' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.804 "name": "raid_bdev1", 00:11:19.804 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:19.804 "strip_size_kb": 0, 00:11:19.804 
"state": "online", 00:11:19.804 "raid_level": "raid1", 00:11:19.804 "superblock": true, 00:11:19.804 "num_base_bdevs": 2, 00:11:19.804 "num_base_bdevs_discovered": 2, 00:11:19.804 "num_base_bdevs_operational": 2, 00:11:19.804 "base_bdevs_list": [ 00:11:19.804 { 00:11:19.804 "name": "spare", 00:11:19.804 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:19.804 "is_configured": true, 00:11:19.804 "data_offset": 2048, 00:11:19.804 "data_size": 63488 00:11:19.804 }, 00:11:19.804 { 00:11:19.804 "name": "BaseBdev2", 00:11:19.804 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:19.804 "is_configured": true, 00:11:19.804 "data_offset": 2048, 00:11:19.804 "data_size": 63488 00:11:19.804 } 00:11:19.804 ] 00:11:19.804 }' 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.804 21:43:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.373 [2024-11-27 21:43:43.263396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:20.373 [2024-11-27 21:43:43.263489] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:20.373 00:11:20.373 Latency(us) 00:11:20.373 [2024-11-27T21:43:43.494Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:20.373 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:11:20.373 raid_bdev1 : 7.74 89.87 269.60 0.00 0.00 14791.96 284.39 115389.15 00:11:20.373 [2024-11-27T21:43:43.494Z] 
=================================================================================================================== 00:11:20.373 [2024-11-27T21:43:43.494Z] Total : 89.87 269.60 0.00 0.00 14791.96 284.39 115389.15 00:11:20.373 [2024-11-27 21:43:43.346617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:20.373 [2024-11-27 21:43:43.346735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.373 [2024-11-27 21:43:43.346861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:20.373 [2024-11-27 21:43:43.346922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:20.373 { 00:11:20.373 "results": [ 00:11:20.373 { 00:11:20.373 "job": "raid_bdev1", 00:11:20.373 "core_mask": "0x1", 00:11:20.373 "workload": "randrw", 00:11:20.373 "percentage": 50, 00:11:20.373 "status": "finished", 00:11:20.373 "queue_depth": 2, 00:11:20.373 "io_size": 3145728, 00:11:20.373 "runtime": 7.744772, 00:11:20.373 "iops": 89.86707420179704, 00:11:20.373 "mibps": 269.6012226053911, 00:11:20.373 "io_failed": 0, 00:11:20.373 "io_timeout": 0, 00:11:20.373 "avg_latency_us": 14791.959604477239, 00:11:20.373 "min_latency_us": 284.3947598253275, 00:11:20.373 "max_latency_us": 115389.14934497817 00:11:20.373 } 00:11:20.373 ], 00:11:20.373 "core_count": 1 00:11:20.373 } 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:11:20.373 21:43:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:11:20.373 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.374 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:11:20.634 /dev/nbd0 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 
00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.634 1+0 records in 00:11:20.634 1+0 records out 00:11:20.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405744 s, 10.1 MB/s 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' 
-z BaseBdev2 ']' 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.634 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:11:20.895 /dev/nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:20.895 21:43:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:20.895 1+0 records in 00:11:20.895 1+0 records out 00:11:20.895 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256614 s, 16.0 MB/s 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
local nbd_list 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:20.895 21:43:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:21.155 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.415 [2024-11-27 21:43:44.391616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:21.415 [2024-11-27 21:43:44.391721] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.415 [2024-11-27 21:43:44.391758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:11:21.415 [2024-11-27 21:43:44.391772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.415 [2024-11-27 21:43:44.394146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.415 [2024-11-27 21:43:44.394184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:21.415 [2024-11-27 21:43:44.394265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:21.415 [2024-11-27 21:43:44.394317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:21.415 [2024-11-27 21:43:44.394428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.415 spare 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.415 [2024-11-27 21:43:44.494322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:21.415 [2024-11-27 21:43:44.494399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:21.415 [2024-11-27 21:43:44.494732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027720 00:11:21.415 [2024-11-27 21:43:44.494929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:21.415 [2024-11-27 21:43:44.494978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with 
name raid_bdev1, raid_bdev 0x617000001580 00:11:21.415 [2024-11-27 21:43:44.495193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.415 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.416 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.676 21:43:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.676 "name": "raid_bdev1", 00:11:21.676 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:21.676 "strip_size_kb": 0, 00:11:21.676 "state": "online", 00:11:21.676 "raid_level": "raid1", 00:11:21.676 "superblock": true, 00:11:21.676 "num_base_bdevs": 2, 00:11:21.676 "num_base_bdevs_discovered": 2, 00:11:21.676 "num_base_bdevs_operational": 2, 00:11:21.676 "base_bdevs_list": [ 00:11:21.676 { 00:11:21.676 "name": "spare", 00:11:21.676 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:21.676 "is_configured": true, 00:11:21.676 "data_offset": 2048, 00:11:21.676 "data_size": 63488 00:11:21.676 }, 00:11:21.676 { 00:11:21.676 "name": "BaseBdev2", 00:11:21.676 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:21.676 "is_configured": true, 00:11:21.676 "data_offset": 2048, 00:11:21.676 "data_size": 63488 00:11:21.676 } 00:11:21.676 ] 00:11:21.676 }' 00:11:21.676 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.676 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.936 21:43:44 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.936 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:21.936 "name": "raid_bdev1", 00:11:21.936 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:21.936 "strip_size_kb": 0, 00:11:21.936 "state": "online", 00:11:21.936 "raid_level": "raid1", 00:11:21.936 "superblock": true, 00:11:21.936 "num_base_bdevs": 2, 00:11:21.936 "num_base_bdevs_discovered": 2, 00:11:21.936 "num_base_bdevs_operational": 2, 00:11:21.936 "base_bdevs_list": [ 00:11:21.936 { 00:11:21.936 "name": "spare", 00:11:21.936 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:21.936 "is_configured": true, 00:11:21.936 "data_offset": 2048, 00:11:21.936 "data_size": 63488 00:11:21.936 }, 00:11:21.936 { 00:11:21.936 "name": "BaseBdev2", 00:11:21.936 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:21.937 "is_configured": true, 00:11:21.937 "data_offset": 2048, 00:11:21.937 "data_size": 63488 00:11:21.937 } 00:11:21.937 ] 00:11:21.937 }' 00:11:21.937 21:43:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:21.937 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:21.937 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.196 21:43:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.196 [2024-11-27 21:43:45.094652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.196 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.197 "name": "raid_bdev1", 00:11:22.197 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:22.197 "strip_size_kb": 0, 00:11:22.197 "state": "online", 00:11:22.197 "raid_level": "raid1", 00:11:22.197 "superblock": true, 00:11:22.197 "num_base_bdevs": 2, 00:11:22.197 "num_base_bdevs_discovered": 1, 00:11:22.197 "num_base_bdevs_operational": 1, 00:11:22.197 "base_bdevs_list": [ 00:11:22.197 { 00:11:22.197 "name": null, 00:11:22.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.197 "is_configured": false, 00:11:22.197 "data_offset": 0, 00:11:22.197 "data_size": 63488 00:11:22.197 }, 00:11:22.197 { 00:11:22.197 "name": "BaseBdev2", 00:11:22.197 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:22.197 "is_configured": true, 00:11:22.197 "data_offset": 2048, 00:11:22.197 "data_size": 63488 00:11:22.197 } 00:11:22.197 ] 00:11:22.197 }' 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.197 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.456 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:11:22.456 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.456 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:22.456 [2024-11-27 21:43:45.525997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.456 [2024-11-27 21:43:45.526247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:22.456 [2024-11-27 21:43:45.526307] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:11:22.456 [2024-11-27 21:43:45.526410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:22.456 [2024-11-27 21:43:45.531613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000277f0 00:11:22.456 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.456 21:43:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:11:22.456 [2024-11-27 21:43:45.533723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.838 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:23.838 "name": "raid_bdev1", 00:11:23.838 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:23.838 "strip_size_kb": 0, 00:11:23.838 "state": "online", 00:11:23.838 "raid_level": "raid1", 00:11:23.838 "superblock": true, 00:11:23.838 "num_base_bdevs": 2, 00:11:23.838 "num_base_bdevs_discovered": 2, 00:11:23.838 "num_base_bdevs_operational": 2, 00:11:23.839 "process": { 00:11:23.839 "type": "rebuild", 00:11:23.839 "target": "spare", 00:11:23.839 "progress": { 00:11:23.839 "blocks": 20480, 00:11:23.839 "percent": 32 00:11:23.839 } 00:11:23.839 }, 00:11:23.839 "base_bdevs_list": [ 00:11:23.839 { 00:11:23.839 "name": "spare", 00:11:23.839 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:23.839 "is_configured": true, 00:11:23.839 "data_offset": 2048, 00:11:23.839 "data_size": 63488 00:11:23.839 }, 00:11:23.839 { 00:11:23.839 "name": "BaseBdev2", 00:11:23.839 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:23.839 "is_configured": true, 00:11:23.839 "data_offset": 2048, 00:11:23.839 "data_size": 63488 00:11:23.839 } 00:11:23.839 ] 00:11:23.839 }' 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.839 [2024-11-27 21:43:46.694045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.839 [2024-11-27 21:43:46.737902] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:23.839 [2024-11-27 21:43:46.737951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.839 [2024-11-27 21:43:46.737982] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:23.839 [2024-11-27 21:43:46.737989] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.839 "name": "raid_bdev1", 00:11:23.839 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:23.839 "strip_size_kb": 0, 00:11:23.839 "state": "online", 00:11:23.839 "raid_level": "raid1", 00:11:23.839 "superblock": true, 00:11:23.839 "num_base_bdevs": 2, 00:11:23.839 "num_base_bdevs_discovered": 1, 00:11:23.839 "num_base_bdevs_operational": 1, 00:11:23.839 "base_bdevs_list": [ 00:11:23.839 { 00:11:23.839 "name": null, 00:11:23.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.839 "is_configured": false, 00:11:23.839 "data_offset": 0, 00:11:23.839 "data_size": 63488 00:11:23.839 }, 00:11:23.839 { 00:11:23.839 "name": "BaseBdev2", 00:11:23.839 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:23.839 "is_configured": true, 00:11:23.839 "data_offset": 2048, 00:11:23.839 "data_size": 63488 00:11:23.839 } 00:11:23.839 ] 00:11:23.839 }' 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.839 21:43:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:11:24.099 21:43:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:24.099 21:43:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.099 21:43:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:24.099 [2024-11-27 21:43:47.194013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:24.099 [2024-11-27 21:43:47.194113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.099 [2024-11-27 21:43:47.194156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:24.099 [2024-11-27 21:43:47.194184] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.099 [2024-11-27 21:43:47.194646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.099 [2024-11-27 21:43:47.194708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:24.099 [2024-11-27 21:43:47.194852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:24.099 [2024-11-27 21:43:47.194893] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:11:24.099 [2024-11-27 21:43:47.194950] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:11:24.099 [2024-11-27 21:43:47.195004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:24.099 [2024-11-27 21:43:47.200155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000278c0 00:11:24.099 spare 00:11:24.099 21:43:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.099 21:43:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:11:24.099 [2024-11-27 21:43:47.202082] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:25.480 "name": "raid_bdev1", 00:11:25.480 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:25.480 "strip_size_kb": 0, 00:11:25.480 
"state": "online", 00:11:25.480 "raid_level": "raid1", 00:11:25.480 "superblock": true, 00:11:25.480 "num_base_bdevs": 2, 00:11:25.480 "num_base_bdevs_discovered": 2, 00:11:25.480 "num_base_bdevs_operational": 2, 00:11:25.480 "process": { 00:11:25.480 "type": "rebuild", 00:11:25.480 "target": "spare", 00:11:25.480 "progress": { 00:11:25.480 "blocks": 20480, 00:11:25.480 "percent": 32 00:11:25.480 } 00:11:25.480 }, 00:11:25.480 "base_bdevs_list": [ 00:11:25.480 { 00:11:25.480 "name": "spare", 00:11:25.480 "uuid": "17175e5d-80a0-5399-bbc6-01722058854f", 00:11:25.480 "is_configured": true, 00:11:25.480 "data_offset": 2048, 00:11:25.480 "data_size": 63488 00:11:25.480 }, 00:11:25.480 { 00:11:25.480 "name": "BaseBdev2", 00:11:25.480 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:25.480 "is_configured": true, 00:11:25.480 "data_offset": 2048, 00:11:25.480 "data_size": 63488 00:11:25.480 } 00:11:25.480 ] 00:11:25.480 }' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 [2024-11-27 21:43:48.370378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.480 [2024-11-27 21:43:48.406232] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:11:25.480 [2024-11-27 21:43:48.406347] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.480 [2024-11-27 21:43:48.406381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:25.480 [2024-11-27 21:43:48.406422] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.480 21:43:48 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.480 "name": "raid_bdev1", 00:11:25.480 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:25.480 "strip_size_kb": 0, 00:11:25.480 "state": "online", 00:11:25.480 "raid_level": "raid1", 00:11:25.480 "superblock": true, 00:11:25.480 "num_base_bdevs": 2, 00:11:25.480 "num_base_bdevs_discovered": 1, 00:11:25.480 "num_base_bdevs_operational": 1, 00:11:25.480 "base_bdevs_list": [ 00:11:25.480 { 00:11:25.480 "name": null, 00:11:25.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.480 "is_configured": false, 00:11:25.480 "data_offset": 0, 00:11:25.480 "data_size": 63488 00:11:25.480 }, 00:11:25.480 { 00:11:25.480 "name": "BaseBdev2", 00:11:25.480 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:25.480 "is_configured": true, 00:11:25.480 "data_offset": 2048, 00:11:25.480 "data_size": 63488 00:11:25.480 } 00:11:25.480 ] 00:11:25.480 }' 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.480 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.740 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:26.000 "name": "raid_bdev1", 00:11:26.000 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:26.000 "strip_size_kb": 0, 00:11:26.000 "state": "online", 00:11:26.000 "raid_level": "raid1", 00:11:26.000 "superblock": true, 00:11:26.000 "num_base_bdevs": 2, 00:11:26.000 "num_base_bdevs_discovered": 1, 00:11:26.000 "num_base_bdevs_operational": 1, 00:11:26.000 "base_bdevs_list": [ 00:11:26.000 { 00:11:26.000 "name": null, 00:11:26.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.000 "is_configured": false, 00:11:26.000 "data_offset": 0, 00:11:26.000 "data_size": 63488 00:11:26.000 }, 00:11:26.000 { 00:11:26.000 "name": "BaseBdev2", 00:11:26.000 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:26.000 "is_configured": true, 00:11:26.000 "data_offset": 2048, 00:11:26.000 "data_size": 63488 00:11:26.000 } 00:11:26.000 ] 00:11:26.000 }' 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.000 21:43:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.000 [2024-11-27 21:43:48.998377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:26.000 [2024-11-27 21:43:48.998488] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.000 [2024-11-27 21:43:48.998527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:26.000 [2024-11-27 21:43:48.998566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.000 [2024-11-27 21:43:48.999013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.000 [2024-11-27 21:43:48.999070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:26.000 [2024-11-27 21:43:48.999179] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:11:26.000 [2024-11-27 21:43:48.999234] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:26.000 [2024-11-27 21:43:48.999278] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:26.000 [2024-11-27 21:43:48.999315] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:11:26.000 BaseBdev1 00:11:26.000 21:43:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.000 21:43:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.950 "name": "raid_bdev1", 00:11:26.950 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:26.950 "strip_size_kb": 0, 00:11:26.950 "state": "online", 00:11:26.950 "raid_level": "raid1", 00:11:26.950 "superblock": true, 00:11:26.950 "num_base_bdevs": 2, 00:11:26.950 "num_base_bdevs_discovered": 1, 00:11:26.950 "num_base_bdevs_operational": 1, 00:11:26.950 "base_bdevs_list": [ 00:11:26.950 { 00:11:26.950 "name": null, 00:11:26.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.950 "is_configured": false, 00:11:26.950 "data_offset": 0, 00:11:26.950 "data_size": 63488 00:11:26.950 }, 00:11:26.950 { 00:11:26.950 "name": "BaseBdev2", 00:11:26.950 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:26.950 "is_configured": true, 00:11:26.950 "data_offset": 2048, 00:11:26.950 "data_size": 63488 00:11:26.950 } 00:11:26.950 ] 00:11:26.950 }' 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.950 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:27.520 "name": "raid_bdev1", 00:11:27.520 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:27.520 "strip_size_kb": 0, 00:11:27.520 "state": "online", 00:11:27.520 "raid_level": "raid1", 00:11:27.520 "superblock": true, 00:11:27.520 "num_base_bdevs": 2, 00:11:27.520 "num_base_bdevs_discovered": 1, 00:11:27.520 "num_base_bdevs_operational": 1, 00:11:27.520 "base_bdevs_list": [ 00:11:27.520 { 00:11:27.520 "name": null, 00:11:27.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.520 "is_configured": false, 00:11:27.520 "data_offset": 0, 00:11:27.520 "data_size": 63488 00:11:27.520 }, 00:11:27.520 { 00:11:27.520 "name": "BaseBdev2", 00:11:27.520 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:27.520 "is_configured": true, 00:11:27.520 "data_offset": 2048, 00:11:27.520 "data_size": 63488 00:11:27.520 } 00:11:27.520 ] 00:11:27.520 }' 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:27.520 [2024-11-27 21:43:50.576238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.520 [2024-11-27 21:43:50.576451] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:11:27.520 [2024-11-27 21:43:50.576508] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:11:27.520 request: 00:11:27.520 { 00:11:27.520 "base_bdev": "BaseBdev1", 00:11:27.520 "raid_bdev": "raid_bdev1", 00:11:27.520 "method": "bdev_raid_add_base_bdev", 00:11:27.520 "req_id": 1 00:11:27.520 } 00:11:27.520 Got JSON-RPC error response 00:11:27.520 response: 00:11:27.520 { 00:11:27.520 "code": -22, 00:11:27.520 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:11:27.520 } 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:27.520 21:43:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:11:28.902 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:28.902 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.902 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.902 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.902 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.903 "name": "raid_bdev1", 00:11:28.903 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:28.903 "strip_size_kb": 0, 00:11:28.903 "state": "online", 00:11:28.903 "raid_level": "raid1", 00:11:28.903 "superblock": true, 00:11:28.903 "num_base_bdevs": 2, 00:11:28.903 "num_base_bdevs_discovered": 1, 00:11:28.903 "num_base_bdevs_operational": 1, 00:11:28.903 "base_bdevs_list": [ 00:11:28.903 { 00:11:28.903 "name": null, 00:11:28.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.903 "is_configured": false, 00:11:28.903 "data_offset": 0, 00:11:28.903 "data_size": 63488 00:11:28.903 }, 00:11:28.903 { 00:11:28.903 "name": "BaseBdev2", 00:11:28.903 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:28.903 "is_configured": true, 00:11:28.903 "data_offset": 2048, 00:11:28.903 "data_size": 63488 00:11:28.903 } 00:11:28.903 ] 00:11:28.903 }' 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:28.903 21:43:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.903 21:43:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:29.163 "name": "raid_bdev1", 00:11:29.163 "uuid": "c9ae4145-21c3-4972-a40c-8eab74ff47b1", 00:11:29.163 "strip_size_kb": 0, 00:11:29.163 "state": "online", 00:11:29.163 "raid_level": "raid1", 00:11:29.163 "superblock": true, 00:11:29.163 "num_base_bdevs": 2, 00:11:29.163 "num_base_bdevs_discovered": 1, 00:11:29.163 "num_base_bdevs_operational": 1, 00:11:29.163 "base_bdevs_list": [ 00:11:29.163 { 00:11:29.163 "name": null, 00:11:29.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.163 "is_configured": false, 00:11:29.163 "data_offset": 0, 00:11:29.163 "data_size": 63488 00:11:29.163 }, 00:11:29.163 { 00:11:29.163 "name": "BaseBdev2", 00:11:29.163 "uuid": "fe62dcb4-54be-58bd-b1bb-621853693795", 00:11:29.163 "is_configured": true, 00:11:29.163 "data_offset": 2048, 00:11:29.163 "data_size": 63488 00:11:29.163 } 00:11:29.163 ] 00:11:29.163 }' 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:29.163 21:43:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 87194 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 87194 ']' 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 87194 00:11:29.163 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87194 00:11:29.164 killing process with pid 87194 00:11:29.164 Received shutdown signal, test time was about 16.551084 seconds 00:11:29.164 00:11:29.164 Latency(us) 00:11:29.164 [2024-11-27T21:43:52.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.164 [2024-11-27T21:43:52.285Z] =================================================================================================================== 00:11:29.164 [2024-11-27T21:43:52.285Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87194' 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 87194 00:11:29.164 [2024-11-27 21:43:52.132548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.164 [2024-11-27 21:43:52.132671] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.164 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 87194 00:11:29.164 [2024-11-27 21:43:52.132732] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.164 [2024-11-27 21:43:52.132742] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:11:29.164 [2024-11-27 21:43:52.158373] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:29.423 ************************************ 00:11:29.423 END TEST raid_rebuild_test_sb_io 00:11:29.423 ************************************ 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:11:29.423 00:11:29.423 real 0m18.370s 00:11:29.423 user 0m24.504s 00:11:29.423 sys 0m1.955s 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:11:29.423 21:43:52 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:29.423 21:43:52 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:11:29.423 21:43:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:29.423 21:43:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.423 21:43:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:29.423 ************************************ 00:11:29.423 START TEST raid_rebuild_test 00:11:29.423 ************************************ 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:29.423 21:43:52 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:29.423 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=87869 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 87869 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 87869 ']' 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.424 21:43:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.424 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:29.424 Zero copy mechanism will not be used. 
00:11:29.424 [2024-11-27 21:43:52.505793] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:29.424 [2024-11-27 21:43:52.506004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87869 ] 00:11:29.683 [2024-11-27 21:43:52.659458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.683 [2024-11-27 21:43:52.687682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.683 [2024-11-27 21:43:52.730170] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:29.683 [2024-11-27 21:43:52.730290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 BaseBdev1_malloc 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.253 
[2024-11-27 21:43:53.357773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:30.253 [2024-11-27 21:43:53.357842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.253 [2024-11-27 21:43:53.357870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:30.253 [2024-11-27 21:43:53.357882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.253 [2024-11-27 21:43:53.359939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.253 [2024-11-27 21:43:53.360008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:30.253 BaseBdev1 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.253 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 BaseBdev2_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 [2024-11-27 21:43:53.386377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:30.514 [2024-11-27 21:43:53.386432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:30.514 [2024-11-27 21:43:53.386454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:30.514 [2024-11-27 21:43:53.386463] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.514 [2024-11-27 21:43:53.388544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.514 [2024-11-27 21:43:53.388586] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:30.514 BaseBdev2 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 BaseBdev3_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 [2024-11-27 21:43:53.414864] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:30.514 [2024-11-27 21:43:53.414913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.514 [2024-11-27 21:43:53.414936] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:30.514 [2024-11-27 21:43:53.414945] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.514 [2024-11-27 21:43:53.417034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.514 [2024-11-27 21:43:53.417123] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:30.514 BaseBdev3 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 BaseBdev4_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 [2024-11-27 21:43:53.451419] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:30.514 [2024-11-27 21:43:53.451465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.514 [2024-11-27 21:43:53.451487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:30.514 [2024-11-27 21:43:53.451496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.514 [2024-11-27 21:43:53.453530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.514 [2024-11-27 21:43:53.453565] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:30.514 BaseBdev4 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 spare_malloc 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 spare_delay 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 [2024-11-27 21:43:53.491736] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:30.514 [2024-11-27 21:43:53.491781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:30.514 [2024-11-27 21:43:53.491811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:30.514 [2024-11-27 21:43:53.491820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:30.514 [2024-11-27 
21:43:53.493935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:30.514 [2024-11-27 21:43:53.493966] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:30.514 spare 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 [2024-11-27 21:43:53.503787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:30.514 [2024-11-27 21:43:53.505558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.514 [2024-11-27 21:43:53.505616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:30.514 [2024-11-27 21:43:53.505662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:30.514 [2024-11-27 21:43:53.505737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:30.514 [2024-11-27 21:43:53.505745] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:30.514 [2024-11-27 21:43:53.505986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:30.514 [2024-11-27 21:43:53.506119] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:30.514 [2024-11-27 21:43:53.506131] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:30.514 [2024-11-27 21:43:53.506256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.514 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.514 "name": "raid_bdev1", 00:11:30.514 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:30.514 "strip_size_kb": 0, 00:11:30.514 "state": "online", 00:11:30.514 "raid_level": 
"raid1", 00:11:30.514 "superblock": false, 00:11:30.515 "num_base_bdevs": 4, 00:11:30.515 "num_base_bdevs_discovered": 4, 00:11:30.515 "num_base_bdevs_operational": 4, 00:11:30.515 "base_bdevs_list": [ 00:11:30.515 { 00:11:30.515 "name": "BaseBdev1", 00:11:30.515 "uuid": "48446d1d-50a9-5c51-9e11-aa7c582c73e8", 00:11:30.515 "is_configured": true, 00:11:30.515 "data_offset": 0, 00:11:30.515 "data_size": 65536 00:11:30.515 }, 00:11:30.515 { 00:11:30.515 "name": "BaseBdev2", 00:11:30.515 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:30.515 "is_configured": true, 00:11:30.515 "data_offset": 0, 00:11:30.515 "data_size": 65536 00:11:30.515 }, 00:11:30.515 { 00:11:30.515 "name": "BaseBdev3", 00:11:30.515 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:30.515 "is_configured": true, 00:11:30.515 "data_offset": 0, 00:11:30.515 "data_size": 65536 00:11:30.515 }, 00:11:30.515 { 00:11:30.515 "name": "BaseBdev4", 00:11:30.515 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:30.515 "is_configured": true, 00:11:30.515 "data_offset": 0, 00:11:30.515 "data_size": 65536 00:11:30.515 } 00:11:30.515 ] 00:11:30.515 }' 00:11:30.515 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.515 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.085 [2024-11-27 21:43:53.943323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.085 21:43:53 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:31.085 21:43:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.085 21:43:54 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:31.345 [2024-11-27 21:43:54.218602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:31.345 /dev/nbd0 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:31.345 1+0 records in 00:11:31.345 1+0 records out 00:11:31.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382933 s, 10.7 MB/s 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:31.345 21:43:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:36.622 65536+0 records in 00:11:36.622 65536+0 records out 00:11:36.622 33554432 bytes (34 MB, 32 MiB) copied, 5.24376 s, 6.4 MB/s 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:36.622 [2024-11-27 21:43:59.727261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.622 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:36.881 
21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 [2024-11-27 21:43:59.759301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.881 21:43:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.881 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.881 "name": "raid_bdev1", 00:11:36.881 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:36.881 "strip_size_kb": 0, 00:11:36.881 "state": "online", 00:11:36.881 "raid_level": "raid1", 00:11:36.881 "superblock": false, 00:11:36.881 "num_base_bdevs": 4, 00:11:36.882 "num_base_bdevs_discovered": 3, 00:11:36.882 "num_base_bdevs_operational": 3, 00:11:36.882 "base_bdevs_list": [ 00:11:36.882 { 00:11:36.882 "name": null, 00:11:36.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.882 "is_configured": false, 00:11:36.882 "data_offset": 0, 00:11:36.882 "data_size": 65536 00:11:36.882 }, 00:11:36.882 { 00:11:36.882 "name": "BaseBdev2", 00:11:36.882 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:36.882 "is_configured": true, 00:11:36.882 "data_offset": 0, 00:11:36.882 "data_size": 65536 00:11:36.882 }, 00:11:36.882 { 00:11:36.882 "name": "BaseBdev3", 00:11:36.882 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:36.882 "is_configured": true, 00:11:36.882 "data_offset": 0, 00:11:36.882 "data_size": 65536 00:11:36.882 }, 00:11:36.882 { 00:11:36.882 "name": "BaseBdev4", 00:11:36.882 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:36.882 
"is_configured": true, 00:11:36.882 "data_offset": 0, 00:11:36.882 "data_size": 65536 00:11:36.882 } 00:11:36.882 ] 00:11:36.882 }' 00:11:36.882 21:43:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.882 21:43:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.141 21:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:37.141 21:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.141 21:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.141 [2024-11-27 21:44:00.182588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:37.141 [2024-11-27 21:44:00.186811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d063c0 00:11:37.141 21:44:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.141 21:44:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:37.141 [2024-11-27 21:44:00.188864] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:38.076 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:38.076 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.076 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:38.076 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:38.076 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.335 "name": "raid_bdev1", 00:11:38.335 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:38.335 "strip_size_kb": 0, 00:11:38.335 "state": "online", 00:11:38.335 "raid_level": "raid1", 00:11:38.335 "superblock": false, 00:11:38.335 "num_base_bdevs": 4, 00:11:38.335 "num_base_bdevs_discovered": 4, 00:11:38.335 "num_base_bdevs_operational": 4, 00:11:38.335 "process": { 00:11:38.335 "type": "rebuild", 00:11:38.335 "target": "spare", 00:11:38.335 "progress": { 00:11:38.335 "blocks": 20480, 00:11:38.335 "percent": 31 00:11:38.335 } 00:11:38.335 }, 00:11:38.335 "base_bdevs_list": [ 00:11:38.335 { 00:11:38.335 "name": "spare", 00:11:38.335 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:38.335 "is_configured": true, 00:11:38.335 "data_offset": 0, 00:11:38.335 "data_size": 65536 00:11:38.335 }, 00:11:38.335 { 00:11:38.335 "name": "BaseBdev2", 00:11:38.335 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:38.335 "is_configured": true, 00:11:38.335 "data_offset": 0, 00:11:38.335 "data_size": 65536 00:11:38.335 }, 00:11:38.335 { 00:11:38.335 "name": "BaseBdev3", 00:11:38.335 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:38.335 "is_configured": true, 00:11:38.335 "data_offset": 0, 00:11:38.335 "data_size": 65536 00:11:38.335 }, 00:11:38.335 { 00:11:38.335 "name": "BaseBdev4", 00:11:38.335 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:38.335 "is_configured": true, 00:11:38.335 "data_offset": 0, 00:11:38.335 "data_size": 65536 00:11:38.335 } 00:11:38.335 ] 00:11:38.335 }' 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.335 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.335 [2024-11-27 21:44:01.337623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.335 [2024-11-27 21:44:01.393514] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:38.335 [2024-11-27 21:44:01.393631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.336 [2024-11-27 21:44:01.393671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:38.336 [2024-11-27 21:44:01.393703] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.336 "name": "raid_bdev1", 00:11:38.336 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:38.336 "strip_size_kb": 0, 00:11:38.336 "state": "online", 00:11:38.336 "raid_level": "raid1", 00:11:38.336 "superblock": false, 00:11:38.336 "num_base_bdevs": 4, 00:11:38.336 "num_base_bdevs_discovered": 3, 00:11:38.336 "num_base_bdevs_operational": 3, 00:11:38.336 "base_bdevs_list": [ 00:11:38.336 { 00:11:38.336 "name": null, 00:11:38.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.336 "is_configured": false, 00:11:38.336 "data_offset": 0, 00:11:38.336 "data_size": 65536 00:11:38.336 }, 00:11:38.336 { 00:11:38.336 "name": "BaseBdev2", 00:11:38.336 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:38.336 "is_configured": true, 00:11:38.336 "data_offset": 0, 00:11:38.336 "data_size": 65536 00:11:38.336 }, 00:11:38.336 { 
00:11:38.336 "name": "BaseBdev3", 00:11:38.336 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:38.336 "is_configured": true, 00:11:38.336 "data_offset": 0, 00:11:38.336 "data_size": 65536 00:11:38.336 }, 00:11:38.336 { 00:11:38.336 "name": "BaseBdev4", 00:11:38.336 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:38.336 "is_configured": true, 00:11:38.336 "data_offset": 0, 00:11:38.336 "data_size": 65536 00:11:38.336 } 00:11:38.336 ] 00:11:38.336 }' 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.336 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.905 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:38.905 "name": "raid_bdev1", 00:11:38.905 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:38.905 "strip_size_kb": 0, 00:11:38.905 "state": "online", 
00:11:38.905 "raid_level": "raid1", 00:11:38.905 "superblock": false, 00:11:38.905 "num_base_bdevs": 4, 00:11:38.905 "num_base_bdevs_discovered": 3, 00:11:38.905 "num_base_bdevs_operational": 3, 00:11:38.905 "base_bdevs_list": [ 00:11:38.905 { 00:11:38.905 "name": null, 00:11:38.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.905 "is_configured": false, 00:11:38.906 "data_offset": 0, 00:11:38.906 "data_size": 65536 00:11:38.906 }, 00:11:38.906 { 00:11:38.906 "name": "BaseBdev2", 00:11:38.906 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:38.906 "is_configured": true, 00:11:38.906 "data_offset": 0, 00:11:38.906 "data_size": 65536 00:11:38.906 }, 00:11:38.906 { 00:11:38.906 "name": "BaseBdev3", 00:11:38.906 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:38.906 "is_configured": true, 00:11:38.906 "data_offset": 0, 00:11:38.906 "data_size": 65536 00:11:38.906 }, 00:11:38.906 { 00:11:38.906 "name": "BaseBdev4", 00:11:38.906 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:38.906 "is_configured": true, 00:11:38.906 "data_offset": 0, 00:11:38.906 "data_size": 65536 00:11:38.906 } 00:11:38.906 ] 00:11:38.906 }' 00:11:38.906 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:38.906 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:38.906 21:44:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 [2024-11-27 21:44:02.017080] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:38.906 [2024-11-27 21:44:02.021239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d06490 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.906 21:44:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:38.906 [2024-11-27 21:44:02.023198] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.286 "name": "raid_bdev1", 00:11:40.286 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:40.286 "strip_size_kb": 0, 00:11:40.286 "state": "online", 00:11:40.286 "raid_level": "raid1", 00:11:40.286 "superblock": false, 00:11:40.286 "num_base_bdevs": 4, 00:11:40.286 
"num_base_bdevs_discovered": 4, 00:11:40.286 "num_base_bdevs_operational": 4, 00:11:40.286 "process": { 00:11:40.286 "type": "rebuild", 00:11:40.286 "target": "spare", 00:11:40.286 "progress": { 00:11:40.286 "blocks": 20480, 00:11:40.286 "percent": 31 00:11:40.286 } 00:11:40.286 }, 00:11:40.286 "base_bdevs_list": [ 00:11:40.286 { 00:11:40.286 "name": "spare", 00:11:40.286 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": "BaseBdev2", 00:11:40.286 "uuid": "fe34795a-7205-5fbc-a7dd-bec3e6ccb8c4", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": "BaseBdev3", 00:11:40.286 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": "BaseBdev4", 00:11:40.286 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 } 00:11:40.286 ] 00:11:40.286 }' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.286 [2024-11-27 21:44:03.155772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:40.286 [2024-11-27 21:44:03.227278] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d06490 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.286 21:44:03 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.286 "name": "raid_bdev1", 00:11:40.286 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:40.286 "strip_size_kb": 0, 00:11:40.286 "state": "online", 00:11:40.286 "raid_level": "raid1", 00:11:40.286 "superblock": false, 00:11:40.286 "num_base_bdevs": 4, 00:11:40.286 "num_base_bdevs_discovered": 3, 00:11:40.286 "num_base_bdevs_operational": 3, 00:11:40.286 "process": { 00:11:40.286 "type": "rebuild", 00:11:40.286 "target": "spare", 00:11:40.286 "progress": { 00:11:40.286 "blocks": 24576, 00:11:40.286 "percent": 37 00:11:40.286 } 00:11:40.286 }, 00:11:40.286 "base_bdevs_list": [ 00:11:40.286 { 00:11:40.286 "name": "spare", 00:11:40.286 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": null, 00:11:40.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.286 "is_configured": false, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": "BaseBdev3", 00:11:40.286 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 }, 00:11:40.286 { 00:11:40.286 "name": "BaseBdev4", 00:11:40.286 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:40.286 "is_configured": true, 00:11:40.286 "data_offset": 0, 00:11:40.286 "data_size": 65536 00:11:40.286 } 00:11:40.286 ] 00:11:40.286 }' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=352 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.286 21:44:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:40.547 "name": "raid_bdev1", 00:11:40.547 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:40.547 "strip_size_kb": 0, 00:11:40.547 "state": "online", 00:11:40.547 "raid_level": "raid1", 00:11:40.547 "superblock": false, 00:11:40.547 "num_base_bdevs": 4, 00:11:40.547 "num_base_bdevs_discovered": 3, 00:11:40.547 "num_base_bdevs_operational": 3, 00:11:40.547 "process": { 00:11:40.547 "type": "rebuild", 00:11:40.547 "target": "spare", 00:11:40.547 "progress": { 
00:11:40.547 "blocks": 26624, 00:11:40.547 "percent": 40 00:11:40.547 } 00:11:40.547 }, 00:11:40.547 "base_bdevs_list": [ 00:11:40.547 { 00:11:40.547 "name": "spare", 00:11:40.547 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:40.547 "is_configured": true, 00:11:40.547 "data_offset": 0, 00:11:40.547 "data_size": 65536 00:11:40.547 }, 00:11:40.547 { 00:11:40.547 "name": null, 00:11:40.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.547 "is_configured": false, 00:11:40.547 "data_offset": 0, 00:11:40.547 "data_size": 65536 00:11:40.547 }, 00:11:40.547 { 00:11:40.547 "name": "BaseBdev3", 00:11:40.547 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:40.547 "is_configured": true, 00:11:40.547 "data_offset": 0, 00:11:40.547 "data_size": 65536 00:11:40.547 }, 00:11:40.547 { 00:11:40.547 "name": "BaseBdev4", 00:11:40.547 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:40.547 "is_configured": true, 00:11:40.547 "data_offset": 0, 00:11:40.547 "data_size": 65536 00:11:40.547 } 00:11:40.547 ] 00:11:40.547 }' 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:40.547 21:44:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:41.485 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:41.486 "name": "raid_bdev1", 00:11:41.486 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:41.486 "strip_size_kb": 0, 00:11:41.486 "state": "online", 00:11:41.486 "raid_level": "raid1", 00:11:41.486 "superblock": false, 00:11:41.486 "num_base_bdevs": 4, 00:11:41.486 "num_base_bdevs_discovered": 3, 00:11:41.486 "num_base_bdevs_operational": 3, 00:11:41.486 "process": { 00:11:41.486 "type": "rebuild", 00:11:41.486 "target": "spare", 00:11:41.486 "progress": { 00:11:41.486 "blocks": 51200, 00:11:41.486 "percent": 78 00:11:41.486 } 00:11:41.486 }, 00:11:41.486 "base_bdevs_list": [ 00:11:41.486 { 00:11:41.486 "name": "spare", 00:11:41.486 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:41.486 "is_configured": true, 00:11:41.486 "data_offset": 0, 00:11:41.486 "data_size": 65536 00:11:41.486 }, 00:11:41.486 { 00:11:41.486 "name": null, 00:11:41.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.486 "is_configured": false, 00:11:41.486 "data_offset": 0, 00:11:41.486 "data_size": 65536 00:11:41.486 }, 00:11:41.486 { 00:11:41.486 "name": "BaseBdev3", 00:11:41.486 "uuid": 
"fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:41.486 "is_configured": true, 00:11:41.486 "data_offset": 0, 00:11:41.486 "data_size": 65536 00:11:41.486 }, 00:11:41.486 { 00:11:41.486 "name": "BaseBdev4", 00:11:41.486 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:41.486 "is_configured": true, 00:11:41.486 "data_offset": 0, 00:11:41.486 "data_size": 65536 00:11:41.486 } 00:11:41.486 ] 00:11:41.486 }' 00:11:41.486 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:41.745 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:41.745 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:41.745 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:41.745 21:44:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:42.314 [2024-11-27 21:44:05.235126] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:42.314 [2024-11-27 21:44:05.235333] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:42.314 [2024-11-27 21:44:05.235449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.574 21:44:05 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.574 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.834 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.834 "name": "raid_bdev1", 00:11:42.834 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:42.834 "strip_size_kb": 0, 00:11:42.834 "state": "online", 00:11:42.834 "raid_level": "raid1", 00:11:42.834 "superblock": false, 00:11:42.834 "num_base_bdevs": 4, 00:11:42.834 "num_base_bdevs_discovered": 3, 00:11:42.835 "num_base_bdevs_operational": 3, 00:11:42.835 "base_bdevs_list": [ 00:11:42.835 { 00:11:42.835 "name": "spare", 00:11:42.835 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": null, 00:11:42.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.835 "is_configured": false, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": "BaseBdev3", 00:11:42.835 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": "BaseBdev4", 00:11:42.835 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 } 00:11:42.835 ] 00:11:42.835 }' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:42.835 "name": "raid_bdev1", 00:11:42.835 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:42.835 "strip_size_kb": 0, 00:11:42.835 "state": "online", 00:11:42.835 "raid_level": "raid1", 00:11:42.835 "superblock": false, 00:11:42.835 "num_base_bdevs": 4, 00:11:42.835 "num_base_bdevs_discovered": 3, 00:11:42.835 "num_base_bdevs_operational": 3, 00:11:42.835 
"base_bdevs_list": [ 00:11:42.835 { 00:11:42.835 "name": "spare", 00:11:42.835 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": null, 00:11:42.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.835 "is_configured": false, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": "BaseBdev3", 00:11:42.835 "uuid": "fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 }, 00:11:42.835 { 00:11:42.835 "name": "BaseBdev4", 00:11:42.835 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:42.835 "is_configured": true, 00:11:42.835 "data_offset": 0, 00:11:42.835 "data_size": 65536 00:11:42.835 } 00:11:42.835 ] 00:11:42.835 }' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.835 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.095 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.095 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.095 "name": "raid_bdev1", 00:11:43.095 "uuid": "2c6ae5db-d390-4e7c-8773-99a5ee5c6c0c", 00:11:43.095 "strip_size_kb": 0, 00:11:43.095 "state": "online", 00:11:43.095 "raid_level": "raid1", 00:11:43.095 "superblock": false, 00:11:43.095 "num_base_bdevs": 4, 00:11:43.095 "num_base_bdevs_discovered": 3, 00:11:43.095 "num_base_bdevs_operational": 3, 00:11:43.095 "base_bdevs_list": [ 00:11:43.095 { 00:11:43.095 "name": "spare", 00:11:43.095 "uuid": "7bcbbae7-e39b-5993-b5d3-c0920c282a54", 00:11:43.095 "is_configured": true, 00:11:43.095 "data_offset": 0, 00:11:43.095 "data_size": 65536 00:11:43.095 }, 00:11:43.095 { 00:11:43.095 "name": null, 00:11:43.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.095 "is_configured": false, 00:11:43.095 "data_offset": 0, 00:11:43.095 "data_size": 65536 00:11:43.095 }, 00:11:43.095 { 00:11:43.095 "name": "BaseBdev3", 00:11:43.095 "uuid": 
"fed75da6-1f82-55fc-a73f-a66c9b440fde", 00:11:43.095 "is_configured": true, 00:11:43.095 "data_offset": 0, 00:11:43.095 "data_size": 65536 00:11:43.095 }, 00:11:43.095 { 00:11:43.095 "name": "BaseBdev4", 00:11:43.095 "uuid": "0c824279-bd40-5567-b60b-b6ecbdfd962b", 00:11:43.095 "is_configured": true, 00:11:43.096 "data_offset": 0, 00:11:43.096 "data_size": 65536 00:11:43.096 } 00:11:43.096 ] 00:11:43.096 }' 00:11:43.096 21:44:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.096 21:44:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.355 [2024-11-27 21:44:06.361880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:43.355 [2024-11-27 21:44:06.361907] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.355 [2024-11-27 21:44:06.361999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.355 [2024-11-27 21:44:06.362086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.355 [2024-11-27 21:44:06.362100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.355 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:43.615 /dev/nbd0 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:43.615 21:44:06 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.615 1+0 records in 00:11:43.615 1+0 records out 00:11:43.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218008 s, 18.8 MB/s 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.615 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:43.875 /dev/nbd1 00:11:43.875 
21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:43.875 1+0 records in 00:11:43.875 1+0 records out 00:11:43.875 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407478 s, 10.1 MB/s 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:43.875 21:44:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:44.137 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 87869 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 87869 ']' 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 87869 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87869 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.396 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87869' 00:11:44.397 killing process with pid 87869 00:11:44.397 
Received shutdown signal, test time was about 60.000000 seconds 00:11:44.397 00:11:44.397 Latency(us) 00:11:44.397 [2024-11-27T21:44:07.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.397 [2024-11-27T21:44:07.518Z] =================================================================================================================== 00:11:44.397 [2024-11-27T21:44:07.518Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:44.397 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 87869 00:11:44.397 [2024-11-27 21:44:07.448430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.397 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 87869 00:11:44.397 [2024-11-27 21:44:07.499749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:44.656 21:44:07 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:44.656 ************************************ 00:11:44.656 END TEST raid_rebuild_test 00:11:44.656 ************************************ 00:11:44.656 00:11:44.656 real 0m15.287s 00:11:44.656 user 0m17.500s 00:11:44.656 sys 0m2.788s 00:11:44.656 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.656 21:44:07 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.656 21:44:07 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:11:44.656 21:44:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:44.656 21:44:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.656 21:44:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:44.916 ************************************ 00:11:44.917 START TEST raid_rebuild_test_sb 00:11:44.917 ************************************ 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=88299 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 88299 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 88299 ']' 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.917 21:44:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:44.917 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:44.917 Zero copy mechanism will not be used. 00:11:44.917 [2024-11-27 21:44:07.869974] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:11:44.917 [2024-11-27 21:44:07.870118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88299 ] 00:11:44.917 [2024-11-27 21:44:08.022630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.176 [2024-11-27 21:44:08.048937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:45.176 [2024-11-27 21:44:08.091290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.176 [2024-11-27 21:44:08.091409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 BaseBdev1_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 [2024-11-27 21:44:08.722528] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:45.746 [2024-11-27 21:44:08.722636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.746 [2024-11-27 21:44:08.722684] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:11:45.746 [2024-11-27 21:44:08.722715] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.746 [2024-11-27 21:44:08.724890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.746 [2024-11-27 21:44:08.724957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:45.746 BaseBdev1 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 BaseBdev2_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 [2024-11-27 21:44:08.750941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:45.746 [2024-11-27 21:44:08.751028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.746 [2024-11-27 21:44:08.751068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:45.746 [2024-11-27 21:44:08.751094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.746 [2024-11-27 21:44:08.753156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.746 [2024-11-27 21:44:08.753229] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:45.746 BaseBdev2 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 BaseBdev3_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 [2024-11-27 21:44:08.779400] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:11:45.746 [2024-11-27 21:44:08.779454] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.746 [2024-11-27 21:44:08.779474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:45.746 [2024-11-27 21:44:08.779483] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.746 [2024-11-27 21:44:08.781589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.746 [2024-11-27 21:44:08.781624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:45.746 BaseBdev3 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 BaseBdev4_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:11:45.746 [2024-11-27 21:44:08.818719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:11:45.746 [2024-11-27 21:44:08.818767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.746 [2024-11-27 21:44:08.818789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:45.746 [2024-11-27 21:44:08.818810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.746 [2024-11-27 21:44:08.820842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.746 [2024-11-27 21:44:08.820874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:45.746 BaseBdev4 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.746 spare_malloc 00:11:45.746 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.747 spare_delay 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:45.747 21:44:08 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:45.747 [2024-11-27 21:44:08.859254] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:45.747 [2024-11-27 21:44:08.859336] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.747 [2024-11-27 21:44:08.859360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:45.747 [2024-11-27 21:44:08.859368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.747 [2024-11-27 21:44:08.861515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.747 [2024-11-27 21:44:08.861549] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:45.747 spare 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.747 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.006 [2024-11-27 21:44:08.871313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.006 [2024-11-27 21:44:08.873194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.006 [2024-11-27 21:44:08.873258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.006 [2024-11-27 21:44:08.873306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.006 [2024-11-27 21:44:08.873485] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:11:46.006 [2024-11-27 21:44:08.873496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.006 [2024-11-27 21:44:08.873741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:11:46.006 [2024-11-27 21:44:08.873879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:11:46.006 [2024-11-27 21:44:08.873891] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:11:46.006 [2024-11-27 21:44:08.874015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.006 21:44:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.006 "name": "raid_bdev1", 00:11:46.006 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:46.006 "strip_size_kb": 0, 00:11:46.006 "state": "online", 00:11:46.006 "raid_level": "raid1", 00:11:46.006 "superblock": true, 00:11:46.006 "num_base_bdevs": 4, 00:11:46.006 "num_base_bdevs_discovered": 4, 00:11:46.006 "num_base_bdevs_operational": 4, 00:11:46.007 "base_bdevs_list": [ 00:11:46.007 { 00:11:46.007 "name": "BaseBdev1", 00:11:46.007 "uuid": "949480fc-fc3d-549b-b14c-6bb55e988366", 00:11:46.007 "is_configured": true, 00:11:46.007 "data_offset": 2048, 00:11:46.007 "data_size": 63488 00:11:46.007 }, 00:11:46.007 { 00:11:46.007 "name": "BaseBdev2", 00:11:46.007 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:46.007 "is_configured": true, 00:11:46.007 "data_offset": 2048, 00:11:46.007 "data_size": 63488 00:11:46.007 }, 00:11:46.007 { 00:11:46.007 "name": "BaseBdev3", 00:11:46.007 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:46.007 "is_configured": true, 00:11:46.007 "data_offset": 2048, 00:11:46.007 "data_size": 63488 00:11:46.007 }, 00:11:46.007 { 00:11:46.007 "name": "BaseBdev4", 00:11:46.007 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:46.007 "is_configured": true, 00:11:46.007 "data_offset": 2048, 00:11:46.007 "data_size": 63488 00:11:46.007 } 00:11:46.007 ] 00:11:46.007 }' 00:11:46.007 21:44:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.007 21:44:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.280 [2024-11-27 21:44:09.302891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.280 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:46.541 [2024-11-27 21:44:09.566169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:11:46.541 /dev/nbd0 00:11:46.541 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:46.541 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:46.541 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:46.541 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:46.542 
21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:46.542 1+0 records in 00:11:46.542 1+0 records out 00:11:46.542 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225647 s, 18.2 MB/s 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:46.542 21:44:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:11:51.831 63488+0 records in 00:11:51.831 63488+0 records out 00:11:51.831 32505856 bytes (33 MB, 31 MiB) copied, 5.11298 s, 6.4 MB/s 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:51.831 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:52.091 [2024-11-27 21:44:14.958301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.091 [2024-11-27 21:44:14.973532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.091 
21:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.091 21:44:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.091 21:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.091 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.091 "name": "raid_bdev1", 00:11:52.091 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:52.091 "strip_size_kb": 0, 00:11:52.091 "state": 
"online", 00:11:52.091 "raid_level": "raid1", 00:11:52.091 "superblock": true, 00:11:52.091 "num_base_bdevs": 4, 00:11:52.091 "num_base_bdevs_discovered": 3, 00:11:52.091 "num_base_bdevs_operational": 3, 00:11:52.091 "base_bdevs_list": [ 00:11:52.091 { 00:11:52.091 "name": null, 00:11:52.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.091 "is_configured": false, 00:11:52.091 "data_offset": 0, 00:11:52.091 "data_size": 63488 00:11:52.091 }, 00:11:52.091 { 00:11:52.091 "name": "BaseBdev2", 00:11:52.091 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:52.091 "is_configured": true, 00:11:52.091 "data_offset": 2048, 00:11:52.091 "data_size": 63488 00:11:52.091 }, 00:11:52.091 { 00:11:52.091 "name": "BaseBdev3", 00:11:52.091 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:52.091 "is_configured": true, 00:11:52.091 "data_offset": 2048, 00:11:52.091 "data_size": 63488 00:11:52.091 }, 00:11:52.091 { 00:11:52.091 "name": "BaseBdev4", 00:11:52.091 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:52.091 "is_configured": true, 00:11:52.091 "data_offset": 2048, 00:11:52.091 "data_size": 63488 00:11:52.091 } 00:11:52.091 ] 00:11:52.091 }' 00:11:52.091 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.092 21:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.351 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:52.351 21:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.351 21:44:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.351 [2024-11-27 21:44:15.388888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:52.351 [2024-11-27 21:44:15.393130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e420 00:11:52.351 21:44:15 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.351 21:44:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:52.351 [2024-11-27 21:44:15.395141] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.291 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:53.552 "name": "raid_bdev1", 00:11:53.552 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:53.552 "strip_size_kb": 0, 00:11:53.552 "state": "online", 00:11:53.552 "raid_level": "raid1", 00:11:53.552 "superblock": true, 00:11:53.552 "num_base_bdevs": 4, 00:11:53.552 "num_base_bdevs_discovered": 4, 00:11:53.552 "num_base_bdevs_operational": 4, 00:11:53.552 "process": { 00:11:53.552 "type": "rebuild", 00:11:53.552 "target": "spare", 00:11:53.552 "progress": { 00:11:53.552 "blocks": 20480, 
00:11:53.552 "percent": 32 00:11:53.552 } 00:11:53.552 }, 00:11:53.552 "base_bdevs_list": [ 00:11:53.552 { 00:11:53.552 "name": "spare", 00:11:53.552 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:53.552 "is_configured": true, 00:11:53.552 "data_offset": 2048, 00:11:53.552 "data_size": 63488 00:11:53.552 }, 00:11:53.552 { 00:11:53.552 "name": "BaseBdev2", 00:11:53.552 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:53.552 "is_configured": true, 00:11:53.552 "data_offset": 2048, 00:11:53.552 "data_size": 63488 00:11:53.552 }, 00:11:53.552 { 00:11:53.552 "name": "BaseBdev3", 00:11:53.552 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:53.552 "is_configured": true, 00:11:53.552 "data_offset": 2048, 00:11:53.552 "data_size": 63488 00:11:53.552 }, 00:11:53.552 { 00:11:53.552 "name": "BaseBdev4", 00:11:53.552 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:53.552 "is_configured": true, 00:11:53.552 "data_offset": 2048, 00:11:53.552 "data_size": 63488 00:11:53.552 } 00:11:53.552 ] 00:11:53.552 }' 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.552 [2024-11-27 21:44:16.560408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.552 [2024-11-27 21:44:16.600337] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:53.552 [2024-11-27 21:44:16.600400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.552 [2024-11-27 21:44:16.600421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:53.552 [2024-11-27 21:44:16.600428] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.552 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.552 "name": "raid_bdev1", 00:11:53.552 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:53.552 "strip_size_kb": 0, 00:11:53.552 "state": "online", 00:11:53.552 "raid_level": "raid1", 00:11:53.552 "superblock": true, 00:11:53.552 "num_base_bdevs": 4, 00:11:53.552 "num_base_bdevs_discovered": 3, 00:11:53.552 "num_base_bdevs_operational": 3, 00:11:53.552 "base_bdevs_list": [ 00:11:53.552 { 00:11:53.552 "name": null, 00:11:53.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.552 "is_configured": false, 00:11:53.552 "data_offset": 0, 00:11:53.552 "data_size": 63488 00:11:53.552 }, 00:11:53.552 { 00:11:53.552 "name": "BaseBdev2", 00:11:53.552 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:53.552 "is_configured": true, 00:11:53.552 "data_offset": 2048, 00:11:53.552 "data_size": 63488 00:11:53.552 }, 00:11:53.552 { 00:11:53.552 "name": "BaseBdev3", 00:11:53.553 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:53.553 "is_configured": true, 00:11:53.553 "data_offset": 2048, 00:11:53.553 "data_size": 63488 00:11:53.553 }, 00:11:53.553 { 00:11:53.553 "name": "BaseBdev4", 00:11:53.553 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:53.553 "is_configured": true, 00:11:53.553 "data_offset": 2048, 00:11:53.553 "data_size": 63488 00:11:53.553 } 00:11:53.553 ] 00:11:53.553 }' 00:11:53.553 21:44:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.553 21:44:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:54.122 "name": "raid_bdev1", 00:11:54.122 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:54.122 "strip_size_kb": 0, 00:11:54.122 "state": "online", 00:11:54.122 "raid_level": "raid1", 00:11:54.122 "superblock": true, 00:11:54.122 "num_base_bdevs": 4, 00:11:54.122 "num_base_bdevs_discovered": 3, 00:11:54.122 "num_base_bdevs_operational": 3, 00:11:54.122 "base_bdevs_list": [ 00:11:54.122 { 00:11:54.122 "name": null, 00:11:54.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.122 "is_configured": false, 00:11:54.122 "data_offset": 0, 00:11:54.122 "data_size": 63488 00:11:54.122 }, 00:11:54.122 { 00:11:54.122 "name": "BaseBdev2", 00:11:54.122 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:54.122 "is_configured": true, 00:11:54.122 "data_offset": 2048, 00:11:54.122 "data_size": 63488 00:11:54.122 }, 00:11:54.122 { 00:11:54.122 "name": "BaseBdev3", 00:11:54.122 "uuid": 
"c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:54.122 "is_configured": true, 00:11:54.122 "data_offset": 2048, 00:11:54.122 "data_size": 63488 00:11:54.122 }, 00:11:54.122 { 00:11:54.122 "name": "BaseBdev4", 00:11:54.122 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:54.122 "is_configured": true, 00:11:54.122 "data_offset": 2048, 00:11:54.122 "data_size": 63488 00:11:54.122 } 00:11:54.122 ] 00:11:54.122 }' 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.122 [2024-11-27 21:44:17.208090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:54.122 [2024-11-27 21:44:17.212166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000c3e4f0 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.122 21:44:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:54.122 [2024-11-27 21:44:17.214124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.503 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.503 "name": "raid_bdev1", 00:11:55.503 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:55.503 "strip_size_kb": 0, 00:11:55.503 "state": "online", 00:11:55.503 "raid_level": "raid1", 00:11:55.503 "superblock": true, 00:11:55.503 "num_base_bdevs": 4, 00:11:55.503 "num_base_bdevs_discovered": 4, 00:11:55.503 "num_base_bdevs_operational": 4, 00:11:55.503 "process": { 00:11:55.503 "type": "rebuild", 00:11:55.503 "target": "spare", 00:11:55.503 "progress": { 00:11:55.503 "blocks": 20480, 00:11:55.503 "percent": 32 00:11:55.503 } 00:11:55.503 }, 00:11:55.503 "base_bdevs_list": [ 00:11:55.503 { 00:11:55.503 "name": "spare", 00:11:55.503 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": "BaseBdev2", 00:11:55.504 "uuid": "b664d247-5f41-55b9-bab5-68e52a625cab", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 
00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": "BaseBdev3", 00:11:55.504 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": "BaseBdev4", 00:11:55.504 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 } 00:11:55.504 ] 00:11:55.504 }' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:11:55.504 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.504 [2024-11-27 21:44:18.358964] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:55.504 [2024-11-27 21:44:18.518015] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000c3e4f0 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.504 "name": "raid_bdev1", 00:11:55.504 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:55.504 "strip_size_kb": 0, 00:11:55.504 "state": "online", 00:11:55.504 "raid_level": "raid1", 00:11:55.504 "superblock": true, 00:11:55.504 "num_base_bdevs": 4, 
00:11:55.504 "num_base_bdevs_discovered": 3, 00:11:55.504 "num_base_bdevs_operational": 3, 00:11:55.504 "process": { 00:11:55.504 "type": "rebuild", 00:11:55.504 "target": "spare", 00:11:55.504 "progress": { 00:11:55.504 "blocks": 24576, 00:11:55.504 "percent": 38 00:11:55.504 } 00:11:55.504 }, 00:11:55.504 "base_bdevs_list": [ 00:11:55.504 { 00:11:55.504 "name": "spare", 00:11:55.504 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": null, 00:11:55.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.504 "is_configured": false, 00:11:55.504 "data_offset": 0, 00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": "BaseBdev3", 00:11:55.504 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 }, 00:11:55.504 { 00:11:55.504 "name": "BaseBdev4", 00:11:55.504 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:55.504 "is_configured": true, 00:11:55.504 "data_offset": 2048, 00:11:55.504 "data_size": 63488 00:11:55.504 } 00:11:55.504 ] 00:11:55.504 }' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.504 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.764 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.764 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=367 00:11:55.764 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:55.765 "name": "raid_bdev1", 00:11:55.765 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:55.765 "strip_size_kb": 0, 00:11:55.765 "state": "online", 00:11:55.765 "raid_level": "raid1", 00:11:55.765 "superblock": true, 00:11:55.765 "num_base_bdevs": 4, 00:11:55.765 "num_base_bdevs_discovered": 3, 00:11:55.765 "num_base_bdevs_operational": 3, 00:11:55.765 "process": { 00:11:55.765 "type": "rebuild", 00:11:55.765 "target": "spare", 00:11:55.765 "progress": { 00:11:55.765 "blocks": 26624, 00:11:55.765 "percent": 41 00:11:55.765 } 00:11:55.765 }, 00:11:55.765 "base_bdevs_list": [ 00:11:55.765 { 00:11:55.765 "name": "spare", 00:11:55.765 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:55.765 "is_configured": true, 00:11:55.765 "data_offset": 2048, 00:11:55.765 "data_size": 63488 00:11:55.765 }, 00:11:55.765 { 
00:11:55.765 "name": null, 00:11:55.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.765 "is_configured": false, 00:11:55.765 "data_offset": 0, 00:11:55.765 "data_size": 63488 00:11:55.765 }, 00:11:55.765 { 00:11:55.765 "name": "BaseBdev3", 00:11:55.765 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:55.765 "is_configured": true, 00:11:55.765 "data_offset": 2048, 00:11:55.765 "data_size": 63488 00:11:55.765 }, 00:11:55.765 { 00:11:55.765 "name": "BaseBdev4", 00:11:55.765 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:55.765 "is_configured": true, 00:11:55.765 "data_offset": 2048, 00:11:55.765 "data_size": 63488 00:11:55.765 } 00:11:55.765 ] 00:11:55.765 }' 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:55.765 21:44:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.700 21:44:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:56.960 "name": "raid_bdev1", 00:11:56.960 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:56.960 "strip_size_kb": 0, 00:11:56.960 "state": "online", 00:11:56.960 "raid_level": "raid1", 00:11:56.960 "superblock": true, 00:11:56.960 "num_base_bdevs": 4, 00:11:56.960 "num_base_bdevs_discovered": 3, 00:11:56.960 "num_base_bdevs_operational": 3, 00:11:56.960 "process": { 00:11:56.960 "type": "rebuild", 00:11:56.960 "target": "spare", 00:11:56.960 "progress": { 00:11:56.960 "blocks": 49152, 00:11:56.960 "percent": 77 00:11:56.960 } 00:11:56.960 }, 00:11:56.960 "base_bdevs_list": [ 00:11:56.960 { 00:11:56.960 "name": "spare", 00:11:56.960 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:56.960 "is_configured": true, 00:11:56.960 "data_offset": 2048, 00:11:56.960 "data_size": 63488 00:11:56.960 }, 00:11:56.960 { 00:11:56.960 "name": null, 00:11:56.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.960 "is_configured": false, 00:11:56.960 "data_offset": 0, 00:11:56.960 "data_size": 63488 00:11:56.960 }, 00:11:56.960 { 00:11:56.960 "name": "BaseBdev3", 00:11:56.960 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:56.960 "is_configured": true, 00:11:56.960 "data_offset": 2048, 00:11:56.960 "data_size": 63488 00:11:56.960 }, 00:11:56.960 { 00:11:56.960 "name": "BaseBdev4", 00:11:56.960 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:56.960 "is_configured": true, 00:11:56.960 "data_offset": 
2048, 00:11:56.960 "data_size": 63488 00:11:56.960 } 00:11:56.960 ] 00:11:56.960 }' 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:56.960 21:44:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:57.529 [2024-11-27 21:44:20.424533] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:57.529 [2024-11-27 21:44:20.424603] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:57.529 [2024-11-27 21:44:20.424704] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.099 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.099 "name": "raid_bdev1", 00:11:58.099 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:58.100 "strip_size_kb": 0, 00:11:58.100 "state": "online", 00:11:58.100 "raid_level": "raid1", 00:11:58.100 "superblock": true, 00:11:58.100 "num_base_bdevs": 4, 00:11:58.100 "num_base_bdevs_discovered": 3, 00:11:58.100 "num_base_bdevs_operational": 3, 00:11:58.100 "base_bdevs_list": [ 00:11:58.100 { 00:11:58.100 "name": "spare", 00:11:58.100 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": null, 00:11:58.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.100 "is_configured": false, 00:11:58.100 "data_offset": 0, 00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": "BaseBdev3", 00:11:58.100 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": "BaseBdev4", 00:11:58.100 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 00:11:58.100 "data_size": 63488 00:11:58.100 } 00:11:58.100 ] 00:11:58.100 }' 00:11:58.100 21:44:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.100 "name": "raid_bdev1", 00:11:58.100 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:58.100 "strip_size_kb": 0, 00:11:58.100 "state": "online", 00:11:58.100 "raid_level": "raid1", 00:11:58.100 "superblock": true, 00:11:58.100 "num_base_bdevs": 4, 00:11:58.100 "num_base_bdevs_discovered": 3, 00:11:58.100 "num_base_bdevs_operational": 3, 00:11:58.100 "base_bdevs_list": [ 00:11:58.100 { 00:11:58.100 "name": "spare", 00:11:58.100 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 
00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": null, 00:11:58.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.100 "is_configured": false, 00:11:58.100 "data_offset": 0, 00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": "BaseBdev3", 00:11:58.100 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 00:11:58.100 "data_size": 63488 00:11:58.100 }, 00:11:58.100 { 00:11:58.100 "name": "BaseBdev4", 00:11:58.100 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:58.100 "is_configured": true, 00:11:58.100 "data_offset": 2048, 00:11:58.100 "data_size": 63488 00:11:58.100 } 00:11:58.100 ] 00:11:58.100 }' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.100 
21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.100 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.360 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.360 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.360 "name": "raid_bdev1", 00:11:58.360 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:58.360 "strip_size_kb": 0, 00:11:58.360 "state": "online", 00:11:58.360 "raid_level": "raid1", 00:11:58.360 "superblock": true, 00:11:58.360 "num_base_bdevs": 4, 00:11:58.360 "num_base_bdevs_discovered": 3, 00:11:58.360 "num_base_bdevs_operational": 3, 00:11:58.360 "base_bdevs_list": [ 00:11:58.360 { 00:11:58.360 "name": "spare", 00:11:58.360 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:58.360 "is_configured": true, 00:11:58.360 "data_offset": 2048, 00:11:58.360 "data_size": 63488 00:11:58.360 }, 00:11:58.360 { 00:11:58.360 "name": null, 00:11:58.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.360 "is_configured": false, 00:11:58.360 "data_offset": 0, 00:11:58.360 "data_size": 63488 00:11:58.360 }, 00:11:58.360 { 00:11:58.360 "name": "BaseBdev3", 00:11:58.360 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:58.360 "is_configured": true, 00:11:58.360 "data_offset": 2048, 00:11:58.360 "data_size": 63488 
00:11:58.360 }, 00:11:58.360 { 00:11:58.360 "name": "BaseBdev4", 00:11:58.360 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:58.360 "is_configured": true, 00:11:58.360 "data_offset": 2048, 00:11:58.360 "data_size": 63488 00:11:58.360 } 00:11:58.360 ] 00:11:58.360 }' 00:11:58.360 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.360 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.620 [2024-11-27 21:44:21.623003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:58.620 [2024-11-27 21:44:21.623032] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.620 [2024-11-27 21:44:21.623131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.620 [2024-11-27 21:44:21.623219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.620 [2024-11-27 21:44:21.623233] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:11:58.620 
21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.620 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:58.880 /dev/nbd0 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:58.880 1+0 records in 00:11:58.880 1+0 records out 00:11:58.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00051549 s, 7.9 MB/s 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:58.880 21:44:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:59.139 /dev/nbd1 00:11:59.139 21:44:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:59.139 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:59.139 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:59.140 1+0 records in 00:11:59.140 1+0 records out 00:11:59.140 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478136 s, 8.6 MB/s 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:59.140 21:44:22 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.140 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:59.400 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:59.659 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:59.659 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.660 [2024-11-27 21:44:22.671870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:11:59.660 [2024-11-27 21:44:22.671930] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:59.660 [2024-11-27 21:44:22.671951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:59.660 [2024-11-27 21:44:22.671965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:59.660 [2024-11-27 21:44:22.674205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:59.660 [2024-11-27 21:44:22.674291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:59.660 [2024-11-27 21:44:22.674389] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:11:59.660 [2024-11-27 21:44:22.674435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.660 [2024-11-27 21:44:22.674548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.660 [2024-11-27 21:44:22.674655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.660 spare 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.660 [2024-11-27 21:44:22.774537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:11:59.660 [2024-11-27 21:44:22.774567] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:59.660 [2024-11-27 21:44:22.774863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeb00 00:11:59.660 [2024-11-27 21:44:22.775034] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:11:59.660 [2024-11-27 21:44:22.775044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:11:59.660 [2024-11-27 21:44:22.775179] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.660 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.920 "name": "raid_bdev1", 00:11:59.920 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:11:59.920 "strip_size_kb": 0, 00:11:59.920 "state": "online", 00:11:59.920 "raid_level": "raid1", 00:11:59.920 "superblock": true, 00:11:59.920 "num_base_bdevs": 4, 00:11:59.920 "num_base_bdevs_discovered": 3, 00:11:59.920 "num_base_bdevs_operational": 3, 00:11:59.920 "base_bdevs_list": [ 00:11:59.920 { 00:11:59.920 "name": "spare", 00:11:59.920 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:11:59.920 "is_configured": true, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 }, 00:11:59.920 { 00:11:59.920 "name": null, 00:11:59.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.920 "is_configured": false, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 }, 00:11:59.920 { 00:11:59.920 "name": "BaseBdev3", 00:11:59.920 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:11:59.920 "is_configured": true, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 }, 00:11:59.920 { 00:11:59.920 "name": "BaseBdev4", 00:11:59.920 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:11:59.920 "is_configured": true, 00:11:59.920 "data_offset": 2048, 00:11:59.920 "data_size": 63488 00:11:59.920 } 00:11:59.920 ] 00:11:59.920 }' 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.920 21:44:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.180 21:44:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.180 "name": "raid_bdev1", 00:12:00.180 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:00.180 "strip_size_kb": 0, 00:12:00.180 "state": "online", 00:12:00.180 "raid_level": "raid1", 00:12:00.180 "superblock": true, 00:12:00.180 "num_base_bdevs": 4, 00:12:00.180 "num_base_bdevs_discovered": 3, 00:12:00.180 "num_base_bdevs_operational": 3, 00:12:00.180 "base_bdevs_list": [ 00:12:00.180 { 00:12:00.180 "name": "spare", 00:12:00.180 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:12:00.180 "is_configured": true, 00:12:00.180 "data_offset": 2048, 00:12:00.180 "data_size": 63488 00:12:00.180 }, 00:12:00.180 { 00:12:00.180 "name": null, 00:12:00.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.180 "is_configured": false, 00:12:00.180 "data_offset": 2048, 00:12:00.180 "data_size": 63488 00:12:00.180 }, 00:12:00.180 { 00:12:00.180 "name": "BaseBdev3", 00:12:00.180 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:00.180 "is_configured": true, 00:12:00.180 "data_offset": 2048, 00:12:00.180 "data_size": 63488 00:12:00.180 
}, 00:12:00.180 { 00:12:00.180 "name": "BaseBdev4", 00:12:00.180 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:00.180 "is_configured": true, 00:12:00.180 "data_offset": 2048, 00:12:00.180 "data_size": 63488 00:12:00.180 } 00:12:00.180 ] 00:12:00.180 }' 00:12:00.180 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.440 [2024-11-27 21:44:23.410616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.440 "name": "raid_bdev1", 00:12:00.440 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:00.440 "strip_size_kb": 0, 00:12:00.440 "state": "online", 00:12:00.440 "raid_level": "raid1", 00:12:00.440 "superblock": true, 00:12:00.440 "num_base_bdevs": 4, 00:12:00.440 "num_base_bdevs_discovered": 2, 00:12:00.440 "num_base_bdevs_operational": 
2, 00:12:00.440 "base_bdevs_list": [ 00:12:00.440 { 00:12:00.440 "name": null, 00:12:00.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.440 "is_configured": false, 00:12:00.440 "data_offset": 0, 00:12:00.440 "data_size": 63488 00:12:00.440 }, 00:12:00.440 { 00:12:00.440 "name": null, 00:12:00.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.440 "is_configured": false, 00:12:00.440 "data_offset": 2048, 00:12:00.440 "data_size": 63488 00:12:00.440 }, 00:12:00.440 { 00:12:00.440 "name": "BaseBdev3", 00:12:00.440 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:00.440 "is_configured": true, 00:12:00.440 "data_offset": 2048, 00:12:00.440 "data_size": 63488 00:12:00.440 }, 00:12:00.440 { 00:12:00.440 "name": "BaseBdev4", 00:12:00.440 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:00.440 "is_configured": true, 00:12:00.440 "data_offset": 2048, 00:12:00.440 "data_size": 63488 00:12:00.440 } 00:12:00.440 ] 00:12:00.440 }' 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.440 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.009 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:01.009 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.009 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.009 [2024-11-27 21:44:23.849898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.009 [2024-11-27 21:44:23.850134] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:01.009 [2024-11-27 21:44:23.850199] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:01.009 [2024-11-27 21:44:23.850282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:01.009 [2024-11-27 21:44:23.854242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caebd0 00:12:01.009 21:44:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.009 21:44:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:01.009 [2024-11-27 21:44:23.856164] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.949 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.950 "name": "raid_bdev1", 00:12:01.950 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:01.950 "strip_size_kb": 0, 00:12:01.950 "state": "online", 00:12:01.950 "raid_level": "raid1", 
00:12:01.950 "superblock": true, 00:12:01.950 "num_base_bdevs": 4, 00:12:01.950 "num_base_bdevs_discovered": 3, 00:12:01.950 "num_base_bdevs_operational": 3, 00:12:01.950 "process": { 00:12:01.950 "type": "rebuild", 00:12:01.950 "target": "spare", 00:12:01.950 "progress": { 00:12:01.950 "blocks": 20480, 00:12:01.950 "percent": 32 00:12:01.950 } 00:12:01.950 }, 00:12:01.950 "base_bdevs_list": [ 00:12:01.950 { 00:12:01.950 "name": "spare", 00:12:01.950 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:12:01.950 "is_configured": true, 00:12:01.950 "data_offset": 2048, 00:12:01.950 "data_size": 63488 00:12:01.950 }, 00:12:01.950 { 00:12:01.950 "name": null, 00:12:01.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.950 "is_configured": false, 00:12:01.950 "data_offset": 2048, 00:12:01.950 "data_size": 63488 00:12:01.950 }, 00:12:01.950 { 00:12:01.950 "name": "BaseBdev3", 00:12:01.950 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:01.950 "is_configured": true, 00:12:01.950 "data_offset": 2048, 00:12:01.950 "data_size": 63488 00:12:01.950 }, 00:12:01.950 { 00:12:01.950 "name": "BaseBdev4", 00:12:01.950 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:01.950 "is_configured": true, 00:12:01.950 "data_offset": 2048, 00:12:01.950 "data_size": 63488 00:12:01.950 } 00:12:01.950 ] 00:12:01.950 }' 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.950 21:44:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.950 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.950 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:01.950 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:01.950 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.950 [2024-11-27 21:44:25.013048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.950 [2024-11-27 21:44:25.060138] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:01.950 [2024-11-27 21:44:25.060234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.950 [2024-11-27 21:44:25.060252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:01.950 [2024-11-27 21:44:25.060261] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.209 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.210 "name": "raid_bdev1", 00:12:02.210 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:02.210 "strip_size_kb": 0, 00:12:02.210 "state": "online", 00:12:02.210 "raid_level": "raid1", 00:12:02.210 "superblock": true, 00:12:02.210 "num_base_bdevs": 4, 00:12:02.210 "num_base_bdevs_discovered": 2, 00:12:02.210 "num_base_bdevs_operational": 2, 00:12:02.210 "base_bdevs_list": [ 00:12:02.210 { 00:12:02.210 "name": null, 00:12:02.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.210 "is_configured": false, 00:12:02.210 "data_offset": 0, 00:12:02.210 "data_size": 63488 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": null, 00:12:02.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.210 "is_configured": false, 00:12:02.210 "data_offset": 2048, 00:12:02.210 "data_size": 63488 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": "BaseBdev3", 00:12:02.210 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:02.210 "is_configured": true, 00:12:02.210 "data_offset": 2048, 00:12:02.210 "data_size": 63488 00:12:02.210 }, 00:12:02.210 { 00:12:02.210 "name": "BaseBdev4", 00:12:02.210 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:02.210 "is_configured": true, 00:12:02.210 "data_offset": 2048, 00:12:02.210 "data_size": 63488 00:12:02.210 } 00:12:02.210 ] 00:12:02.210 }' 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:02.210 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.469 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:02.469 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.469 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.469 [2024-11-27 21:44:25.543686] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:02.469 [2024-11-27 21:44:25.543818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:02.469 [2024-11-27 21:44:25.543857] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:12:02.469 [2024-11-27 21:44:25.543888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:02.469 [2024-11-27 21:44:25.544409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:02.469 [2024-11-27 21:44:25.544472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:02.469 [2024-11-27 21:44:25.544609] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:02.469 [2024-11-27 21:44:25.544658] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:02.469 [2024-11-27 21:44:25.544708] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:02.469 [2024-11-27 21:44:25.544789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:02.469 [2024-11-27 21:44:25.548739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000caeca0 00:12:02.469 spare 00:12:02.469 21:44:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.469 21:44:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:02.469 [2024-11-27 21:44:25.550646] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.849 "name": "raid_bdev1", 00:12:03.849 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:03.849 "strip_size_kb": 0, 00:12:03.849 "state": "online", 00:12:03.849 
"raid_level": "raid1", 00:12:03.849 "superblock": true, 00:12:03.849 "num_base_bdevs": 4, 00:12:03.849 "num_base_bdevs_discovered": 3, 00:12:03.849 "num_base_bdevs_operational": 3, 00:12:03.849 "process": { 00:12:03.849 "type": "rebuild", 00:12:03.849 "target": "spare", 00:12:03.849 "progress": { 00:12:03.849 "blocks": 20480, 00:12:03.849 "percent": 32 00:12:03.849 } 00:12:03.849 }, 00:12:03.849 "base_bdevs_list": [ 00:12:03.849 { 00:12:03.849 "name": "spare", 00:12:03.849 "uuid": "8185e3bf-51cc-5d9c-8eaa-2fbbb4bd80b7", 00:12:03.849 "is_configured": true, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": null, 00:12:03.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.849 "is_configured": false, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": "BaseBdev3", 00:12:03.849 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:03.849 "is_configured": true, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": "BaseBdev4", 00:12:03.849 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:03.849 "is_configured": true, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 } 00:12:03.849 ] 00:12:03.849 }' 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.849 [2024-11-27 21:44:26.699092] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.849 [2024-11-27 21:44:26.754942] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:03.849 [2024-11-27 21:44:26.754988] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.849 [2024-11-27 21:44:26.755005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.849 [2024-11-27 21:44:26.755012] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.849 
21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.849 "name": "raid_bdev1", 00:12:03.849 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:03.849 "strip_size_kb": 0, 00:12:03.849 "state": "online", 00:12:03.849 "raid_level": "raid1", 00:12:03.849 "superblock": true, 00:12:03.849 "num_base_bdevs": 4, 00:12:03.849 "num_base_bdevs_discovered": 2, 00:12:03.849 "num_base_bdevs_operational": 2, 00:12:03.849 "base_bdevs_list": [ 00:12:03.849 { 00:12:03.849 "name": null, 00:12:03.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.849 "is_configured": false, 00:12:03.849 "data_offset": 0, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": null, 00:12:03.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.849 "is_configured": false, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": "BaseBdev3", 00:12:03.849 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:03.849 "is_configured": true, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 }, 00:12:03.849 { 00:12:03.849 "name": "BaseBdev4", 00:12:03.849 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:03.849 "is_configured": true, 00:12:03.849 "data_offset": 2048, 00:12:03.849 "data_size": 63488 00:12:03.849 } 00:12:03.849 ] 00:12:03.849 }' 00:12:03.849 21:44:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.849 21:44:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.108 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.366 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:04.367 "name": "raid_bdev1", 00:12:04.367 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:04.367 "strip_size_kb": 0, 00:12:04.367 "state": "online", 00:12:04.367 "raid_level": "raid1", 00:12:04.367 "superblock": true, 00:12:04.367 "num_base_bdevs": 4, 00:12:04.367 "num_base_bdevs_discovered": 2, 00:12:04.367 "num_base_bdevs_operational": 2, 00:12:04.367 "base_bdevs_list": [ 00:12:04.367 { 00:12:04.367 "name": null, 00:12:04.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.367 "is_configured": false, 00:12:04.367 "data_offset": 0, 00:12:04.367 "data_size": 63488 00:12:04.367 }, 00:12:04.367 
{ 00:12:04.367 "name": null, 00:12:04.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.367 "is_configured": false, 00:12:04.367 "data_offset": 2048, 00:12:04.367 "data_size": 63488 00:12:04.367 }, 00:12:04.367 { 00:12:04.367 "name": "BaseBdev3", 00:12:04.367 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:04.367 "is_configured": true, 00:12:04.367 "data_offset": 2048, 00:12:04.367 "data_size": 63488 00:12:04.367 }, 00:12:04.367 { 00:12:04.367 "name": "BaseBdev4", 00:12:04.367 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:04.367 "is_configured": true, 00:12:04.367 "data_offset": 2048, 00:12:04.367 "data_size": 63488 00:12:04.367 } 00:12:04.367 ] 00:12:04.367 }' 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.367 [2024-11-27 21:44:27.342082] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:04.367 [2024-11-27 21:44:27.342134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:04.367 [2024-11-27 21:44:27.342157] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:04.367 [2024-11-27 21:44:27.342165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:04.367 [2024-11-27 21:44:27.342557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:04.367 [2024-11-27 21:44:27.342574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:04.367 [2024-11-27 21:44:27.342643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:04.367 [2024-11-27 21:44:27.342657] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:04.367 [2024-11-27 21:44:27.342666] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:04.367 [2024-11-27 21:44:27.342675] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:04.367 BaseBdev1 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.367 21:44:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.303 21:44:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.303 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.304 "name": "raid_bdev1", 00:12:05.304 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:05.304 "strip_size_kb": 0, 00:12:05.304 "state": "online", 00:12:05.304 "raid_level": "raid1", 00:12:05.304 "superblock": true, 00:12:05.304 "num_base_bdevs": 4, 00:12:05.304 "num_base_bdevs_discovered": 2, 00:12:05.304 "num_base_bdevs_operational": 2, 00:12:05.304 "base_bdevs_list": [ 00:12:05.304 { 00:12:05.304 "name": null, 00:12:05.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.304 "is_configured": false, 00:12:05.304 "data_offset": 0, 00:12:05.304 "data_size": 63488 00:12:05.304 }, 00:12:05.304 { 00:12:05.304 "name": null, 00:12:05.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.304 
"is_configured": false, 00:12:05.304 "data_offset": 2048, 00:12:05.304 "data_size": 63488 00:12:05.304 }, 00:12:05.304 { 00:12:05.304 "name": "BaseBdev3", 00:12:05.304 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:05.304 "is_configured": true, 00:12:05.304 "data_offset": 2048, 00:12:05.304 "data_size": 63488 00:12:05.304 }, 00:12:05.304 { 00:12:05.304 "name": "BaseBdev4", 00:12:05.304 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:05.304 "is_configured": true, 00:12:05.304 "data_offset": 2048, 00:12:05.304 "data_size": 63488 00:12:05.304 } 00:12:05.304 ] 00:12:05.304 }' 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.304 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.872 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:05.872 "name": "raid_bdev1", 00:12:05.872 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:05.872 "strip_size_kb": 0, 00:12:05.872 "state": "online", 00:12:05.872 "raid_level": "raid1", 00:12:05.872 "superblock": true, 00:12:05.872 "num_base_bdevs": 4, 00:12:05.872 "num_base_bdevs_discovered": 2, 00:12:05.872 "num_base_bdevs_operational": 2, 00:12:05.872 "base_bdevs_list": [ 00:12:05.872 { 00:12:05.872 "name": null, 00:12:05.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.872 "is_configured": false, 00:12:05.872 "data_offset": 0, 00:12:05.872 "data_size": 63488 00:12:05.872 }, 00:12:05.872 { 00:12:05.872 "name": null, 00:12:05.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.873 "is_configured": false, 00:12:05.873 "data_offset": 2048, 00:12:05.873 "data_size": 63488 00:12:05.873 }, 00:12:05.873 { 00:12:05.873 "name": "BaseBdev3", 00:12:05.873 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:05.873 "is_configured": true, 00:12:05.873 "data_offset": 2048, 00:12:05.873 "data_size": 63488 00:12:05.873 }, 00:12:05.873 { 00:12:05.873 "name": "BaseBdev4", 00:12:05.873 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:05.873 "is_configured": true, 00:12:05.873 "data_offset": 2048, 00:12:05.873 "data_size": 63488 00:12:05.873 } 00:12:05.873 ] 00:12:05.873 }' 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.873 [2024-11-27 21:44:28.947408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:05.873 [2024-11-27 21:44:28.947603] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:05.873 [2024-11-27 21:44:28.947626] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:05.873 request: 00:12:05.873 { 00:12:05.873 "base_bdev": "BaseBdev1", 00:12:05.873 "raid_bdev": "raid_bdev1", 00:12:05.873 "method": "bdev_raid_add_base_bdev", 00:12:05.873 "req_id": 1 00:12:05.873 } 00:12:05.873 Got JSON-RPC error response 00:12:05.873 response: 00:12:05.873 { 00:12:05.873 "code": -22, 00:12:05.873 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:05.873 } 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.873 21:44:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:07.255 21:44:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.255 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.255 "name": "raid_bdev1", 00:12:07.255 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:07.255 "strip_size_kb": 0, 00:12:07.255 "state": "online", 00:12:07.255 "raid_level": "raid1", 00:12:07.255 "superblock": true, 00:12:07.255 "num_base_bdevs": 4, 00:12:07.255 "num_base_bdevs_discovered": 2, 00:12:07.255 "num_base_bdevs_operational": 2, 00:12:07.255 "base_bdevs_list": [ 00:12:07.255 { 00:12:07.255 "name": null, 00:12:07.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.255 "is_configured": false, 00:12:07.255 "data_offset": 0, 00:12:07.255 "data_size": 63488 00:12:07.255 }, 00:12:07.255 { 00:12:07.255 "name": null, 00:12:07.255 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.255 "is_configured": false, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 }, 00:12:07.255 { 00:12:07.255 "name": "BaseBdev3", 00:12:07.255 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 }, 00:12:07.255 { 00:12:07.255 "name": "BaseBdev4", 00:12:07.255 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 } 00:12:07.255 ] 00:12:07.255 }' 00:12:07.255 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.255 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.516 21:44:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.516 "name": "raid_bdev1", 00:12:07.516 "uuid": "628b25bc-92e9-49c7-b4d6-bb8a1ed59fb7", 00:12:07.516 "strip_size_kb": 0, 00:12:07.516 "state": "online", 00:12:07.516 "raid_level": "raid1", 00:12:07.516 "superblock": true, 00:12:07.516 "num_base_bdevs": 4, 00:12:07.516 "num_base_bdevs_discovered": 2, 00:12:07.516 "num_base_bdevs_operational": 2, 00:12:07.516 "base_bdevs_list": [ 00:12:07.516 { 00:12:07.516 "name": null, 00:12:07.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.516 "is_configured": false, 00:12:07.516 "data_offset": 0, 00:12:07.516 "data_size": 63488 00:12:07.516 }, 00:12:07.516 { 00:12:07.516 "name": null, 00:12:07.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:07.516 "is_configured": false, 00:12:07.516 "data_offset": 2048, 00:12:07.516 "data_size": 63488 00:12:07.516 }, 00:12:07.516 { 00:12:07.516 "name": "BaseBdev3", 00:12:07.516 "uuid": "c1d015f7-6684-5b12-9972-47b7eccb11df", 00:12:07.516 "is_configured": true, 00:12:07.516 "data_offset": 2048, 00:12:07.516 "data_size": 63488 00:12:07.516 }, 
00:12:07.516 { 00:12:07.516 "name": "BaseBdev4", 00:12:07.516 "uuid": "be4716f6-6ad4-5d76-97a1-e56b5a289893", 00:12:07.516 "is_configured": true, 00:12:07.516 "data_offset": 2048, 00:12:07.516 "data_size": 63488 00:12:07.516 } 00:12:07.516 ] 00:12:07.516 }' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 88299 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 88299 ']' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 88299 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88299 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.516 killing process with pid 88299 00:12:07.516 Received shutdown signal, test time was about 60.000000 seconds 00:12:07.516 00:12:07.516 Latency(us) 00:12:07.516 [2024-11-27T21:44:30.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:07.516 [2024-11-27T21:44:30.637Z] =================================================================================================================== 00:12:07.516 [2024-11-27T21:44:30.637Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88299' 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 88299 00:12:07.516 [2024-11-27 21:44:30.566405] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:07.516 [2024-11-27 21:44:30.566553] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.516 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 88299 00:12:07.516 [2024-11-27 21:44:30.566624] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.516 [2024-11-27 21:44:30.566641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:07.516 [2024-11-27 21:44:30.616456] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.775 21:44:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:07.775 00:12:07.775 real 0m23.048s 00:12:07.775 user 0m28.165s 00:12:07.775 sys 0m3.511s 00:12:07.775 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.775 ************************************ 00:12:07.775 END TEST raid_rebuild_test_sb 00:12:07.775 ************************************ 00:12:07.775 21:44:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.775 21:44:30 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:12:07.775 21:44:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:07.775 21:44:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.775 21:44:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:12:08.034 ************************************ 00:12:08.034 START TEST raid_rebuild_test_io 00:12:08.034 ************************************ 00:12:08.034 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89032 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89032 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 89032 ']' 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.035 21:44:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.035 [2024-11-27 21:44:30.994081] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:08.035 [2024-11-27 21:44:30.994278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89032 ] 00:12:08.035 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:08.035 Zero copy mechanism will not be used. 
00:12:08.035 [2024-11-27 21:44:31.150289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.294 [2024-11-27 21:44:31.178768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.294 [2024-11-27 21:44:31.220725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.294 [2024-11-27 21:44:31.220851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 BaseBdev1_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 [2024-11-27 21:44:31.832756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:08.863 [2024-11-27 21:44:31.832826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.863 [2024-11-27 21:44:31.832873] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:08.863 [2024-11-27 
21:44:31.832886] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.863 [2024-11-27 21:44:31.835040] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.863 [2024-11-27 21:44:31.835078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:08.863 BaseBdev1 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 BaseBdev2_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 [2024-11-27 21:44:31.861969] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:08.863 [2024-11-27 21:44:31.862033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.863 [2024-11-27 21:44:31.862059] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:08.863 [2024-11-27 21:44:31.862068] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.863 [2024-11-27 21:44:31.864208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:12:08.863 [2024-11-27 21:44:31.864249] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:08.863 BaseBdev2 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 BaseBdev3_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 [2024-11-27 21:44:31.890715] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:08.863 [2024-11-27 21:44:31.890779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.863 [2024-11-27 21:44:31.890816] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:08.863 [2024-11-27 21:44:31.890826] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.863 [2024-11-27 21:44:31.893020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.863 [2024-11-27 21:44:31.893112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:08.863 BaseBdev3 00:12:08.863 21:44:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 BaseBdev4_malloc 00:12:08.863 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.864 [2024-11-27 21:44:31.930853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:08.864 [2024-11-27 21:44:31.930904] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.864 [2024-11-27 21:44:31.930925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:08.864 [2024-11-27 21:44:31.930934] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.864 [2024-11-27 21:44:31.932980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.864 [2024-11-27 21:44:31.933074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:08.864 BaseBdev4 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.864 spare_malloc 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.864 spare_delay 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:08.864 [2024-11-27 21:44:31.971322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:08.864 [2024-11-27 21:44:31.971379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.864 [2024-11-27 21:44:31.971400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:08.864 [2024-11-27 21:44:31.971409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.864 [2024-11-27 21:44:31.973613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.864 [2024-11-27 21:44:31.973649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:08.864 spare 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.864 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.125 [2024-11-27 21:44:31.983382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.125 [2024-11-27 21:44:31.985342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.125 [2024-11-27 21:44:31.985406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:09.125 [2024-11-27 21:44:31.985453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:09.125 [2024-11-27 21:44:31.985537] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:09.125 [2024-11-27 21:44:31.985546] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:09.125 [2024-11-27 21:44:31.985798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:09.125 [2024-11-27 21:44:31.985964] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:09.125 [2024-11-27 21:44:31.985976] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:09.125 [2024-11-27 21:44:31.986128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:09.125 21:44:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.125 21:44:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.125 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.125 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.125 "name": "raid_bdev1", 00:12:09.125 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:09.125 "strip_size_kb": 0, 00:12:09.125 "state": "online", 00:12:09.125 "raid_level": "raid1", 00:12:09.125 "superblock": false, 00:12:09.125 "num_base_bdevs": 4, 00:12:09.125 "num_base_bdevs_discovered": 4, 00:12:09.125 "num_base_bdevs_operational": 4, 00:12:09.125 "base_bdevs_list": [ 00:12:09.125 
{ 00:12:09.125 "name": "BaseBdev1", 00:12:09.125 "uuid": "34cb9055-40fc-5354-a481-d48cd788e779", 00:12:09.125 "is_configured": true, 00:12:09.125 "data_offset": 0, 00:12:09.125 "data_size": 65536 00:12:09.125 }, 00:12:09.125 { 00:12:09.125 "name": "BaseBdev2", 00:12:09.125 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:09.125 "is_configured": true, 00:12:09.125 "data_offset": 0, 00:12:09.125 "data_size": 65536 00:12:09.125 }, 00:12:09.125 { 00:12:09.125 "name": "BaseBdev3", 00:12:09.125 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:09.125 "is_configured": true, 00:12:09.125 "data_offset": 0, 00:12:09.125 "data_size": 65536 00:12:09.125 }, 00:12:09.125 { 00:12:09.125 "name": "BaseBdev4", 00:12:09.125 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:09.125 "is_configured": true, 00:12:09.125 "data_offset": 0, 00:12:09.125 "data_size": 65536 00:12:09.125 } 00:12:09.125 ] 00:12:09.125 }' 00:12:09.125 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.125 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.385 [2024-11-27 21:44:32.478847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.385 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.645 
21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 [2024-11-27 21:44:32.558392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.645 "name": "raid_bdev1", 00:12:09.645 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:09.645 "strip_size_kb": 0, 00:12:09.645 "state": "online", 00:12:09.645 "raid_level": "raid1", 00:12:09.645 "superblock": false, 00:12:09.645 "num_base_bdevs": 4, 00:12:09.645 "num_base_bdevs_discovered": 3, 00:12:09.645 "num_base_bdevs_operational": 3, 00:12:09.645 "base_bdevs_list": [ 00:12:09.645 { 00:12:09.645 "name": null, 00:12:09.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.645 "is_configured": false, 00:12:09.645 "data_offset": 0, 00:12:09.645 "data_size": 65536 00:12:09.645 }, 00:12:09.645 { 00:12:09.645 "name": "BaseBdev2", 00:12:09.645 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:09.645 "is_configured": true, 00:12:09.645 "data_offset": 0, 00:12:09.645 "data_size": 65536 00:12:09.645 }, 00:12:09.645 { 00:12:09.645 "name": "BaseBdev3", 00:12:09.645 "uuid": 
"28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:09.645 "is_configured": true, 00:12:09.645 "data_offset": 0, 00:12:09.645 "data_size": 65536 00:12:09.645 }, 00:12:09.645 { 00:12:09.645 "name": "BaseBdev4", 00:12:09.645 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:09.645 "is_configured": true, 00:12:09.645 "data_offset": 0, 00:12:09.645 "data_size": 65536 00:12:09.645 } 00:12:09.645 ] 00:12:09.645 }' 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.645 21:44:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 [2024-11-27 21:44:32.648248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:09.645 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:09.645 Zero copy mechanism will not be used. 00:12:09.645 Running I/O for 60 seconds... 00:12:09.904 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:09.904 21:44:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.904 21:44:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:10.167 [2024-11-27 21:44:33.026709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.167 21:44:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.167 21:44:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:10.167 [2024-11-27 21:44:33.064201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:10.167 [2024-11-27 21:44:33.066170] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.167 [2024-11-27 21:44:33.181693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:10.167 
[2024-11-27 21:44:33.182942] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:10.428 [2024-11-27 21:44:33.398209] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:10.428 [2024-11-27 21:44:33.399089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:10.686 167.00 IOPS, 501.00 MiB/s [2024-11-27T21:44:33.807Z] [2024-11-27 21:44:33.731338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:10.686 [2024-11-27 21:44:33.732862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:10.946 [2024-11-27 21:44:33.943062] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.946 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 
-- # set +x 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.206 "name": "raid_bdev1", 00:12:11.206 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:11.206 "strip_size_kb": 0, 00:12:11.206 "state": "online", 00:12:11.206 "raid_level": "raid1", 00:12:11.206 "superblock": false, 00:12:11.206 "num_base_bdevs": 4, 00:12:11.206 "num_base_bdevs_discovered": 4, 00:12:11.206 "num_base_bdevs_operational": 4, 00:12:11.206 "process": { 00:12:11.206 "type": "rebuild", 00:12:11.206 "target": "spare", 00:12:11.206 "progress": { 00:12:11.206 "blocks": 10240, 00:12:11.206 "percent": 15 00:12:11.206 } 00:12:11.206 }, 00:12:11.206 "base_bdevs_list": [ 00:12:11.206 { 00:12:11.206 "name": "spare", 00:12:11.206 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:11.206 "is_configured": true, 00:12:11.206 "data_offset": 0, 00:12:11.206 "data_size": 65536 00:12:11.206 }, 00:12:11.206 { 00:12:11.206 "name": "BaseBdev2", 00:12:11.206 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:11.206 "is_configured": true, 00:12:11.206 "data_offset": 0, 00:12:11.206 "data_size": 65536 00:12:11.206 }, 00:12:11.206 { 00:12:11.206 "name": "BaseBdev3", 00:12:11.206 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:11.206 "is_configured": true, 00:12:11.206 "data_offset": 0, 00:12:11.206 "data_size": 65536 00:12:11.206 }, 00:12:11.206 { 00:12:11.206 "name": "BaseBdev4", 00:12:11.206 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:11.206 "is_configured": true, 00:12:11.206 "data_offset": 0, 00:12:11.206 "data_size": 65536 00:12:11.206 } 00:12:11.206 ] 00:12:11.206 }' 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.206 21:44:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.206 [2024-11-27 21:44:34.199938] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.206 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.206 [2024-11-27 21:44:34.215433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.466 [2024-11-27 21:44:34.329546] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.466 [2024-11-27 21:44:34.339940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.466 [2024-11-27 21:44:34.340110] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.466 [2024-11-27 21:44:34.340141] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.466 [2024-11-27 21:44:34.357395] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.466 "name": "raid_bdev1", 00:12:11.466 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:11.466 "strip_size_kb": 0, 00:12:11.466 "state": "online", 00:12:11.466 "raid_level": "raid1", 00:12:11.466 "superblock": false, 00:12:11.466 "num_base_bdevs": 4, 00:12:11.466 "num_base_bdevs_discovered": 3, 00:12:11.466 "num_base_bdevs_operational": 3, 00:12:11.466 "base_bdevs_list": [ 00:12:11.466 { 00:12:11.466 "name": null, 00:12:11.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.466 "is_configured": false, 00:12:11.466 "data_offset": 0, 00:12:11.466 "data_size": 65536 00:12:11.466 }, 00:12:11.466 { 00:12:11.466 "name": "BaseBdev2", 
00:12:11.466 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:11.466 "is_configured": true, 00:12:11.466 "data_offset": 0, 00:12:11.466 "data_size": 65536 00:12:11.466 }, 00:12:11.466 { 00:12:11.466 "name": "BaseBdev3", 00:12:11.466 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:11.466 "is_configured": true, 00:12:11.466 "data_offset": 0, 00:12:11.466 "data_size": 65536 00:12:11.466 }, 00:12:11.466 { 00:12:11.466 "name": "BaseBdev4", 00:12:11.466 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:11.466 "is_configured": true, 00:12:11.466 "data_offset": 0, 00:12:11.466 "data_size": 65536 00:12:11.466 } 00:12:11.466 ] 00:12:11.466 }' 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.466 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 149.00 IOPS, 447.00 MiB/s [2024-11-27T21:44:34.847Z] 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:12:11.726 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.726 "name": "raid_bdev1", 00:12:11.726 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:11.726 "strip_size_kb": 0, 00:12:11.726 "state": "online", 00:12:11.726 "raid_level": "raid1", 00:12:11.726 "superblock": false, 00:12:11.726 "num_base_bdevs": 4, 00:12:11.726 "num_base_bdevs_discovered": 3, 00:12:11.726 "num_base_bdevs_operational": 3, 00:12:11.726 "base_bdevs_list": [ 00:12:11.726 { 00:12:11.726 "name": null, 00:12:11.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.726 "is_configured": false, 00:12:11.726 "data_offset": 0, 00:12:11.726 "data_size": 65536 00:12:11.726 }, 00:12:11.726 { 00:12:11.726 "name": "BaseBdev2", 00:12:11.726 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:11.726 "is_configured": true, 00:12:11.726 "data_offset": 0, 00:12:11.726 "data_size": 65536 00:12:11.726 }, 00:12:11.726 { 00:12:11.726 "name": "BaseBdev3", 00:12:11.726 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:11.726 "is_configured": true, 00:12:11.726 "data_offset": 0, 00:12:11.726 "data_size": 65536 00:12:11.726 }, 00:12:11.726 { 00:12:11.726 "name": "BaseBdev4", 00:12:11.726 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:11.726 "is_configured": true, 00:12:11.726 "data_offset": 0, 00:12:11.726 "data_size": 65536 00:12:11.726 } 00:12:11.726 ] 00:12:11.726 }' 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:11.987 [2024-11-27 21:44:34.933909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.987 21:44:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:11.987 [2024-11-27 21:44:34.996721] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:11.987 [2024-11-27 21:44:34.998699] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.247 [2024-11-27 21:44:35.118513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:12.247 [2024-11-27 21:44:35.119920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:12.247 [2024-11-27 21:44:35.329981] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.247 [2024-11-27 21:44:35.330288] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:12.817 155.00 IOPS, 465.00 MiB/s [2024-11-27T21:44:35.938Z] [2024-11-27 21:44:35.663199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.817 [2024-11-27 21:44:35.664396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:12.817 [2024-11-27 21:44:35.897513] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:13.078 21:44:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.078 21:44:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.078 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.078 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.078 "name": "raid_bdev1", 00:12:13.078 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:13.078 "strip_size_kb": 0, 00:12:13.078 "state": "online", 00:12:13.078 "raid_level": "raid1", 00:12:13.078 "superblock": false, 00:12:13.078 "num_base_bdevs": 4, 00:12:13.078 "num_base_bdevs_discovered": 4, 00:12:13.078 "num_base_bdevs_operational": 4, 00:12:13.078 "process": { 00:12:13.078 "type": "rebuild", 00:12:13.078 "target": "spare", 00:12:13.078 "progress": { 00:12:13.078 "blocks": 10240, 00:12:13.078 "percent": 15 00:12:13.078 } 00:12:13.078 }, 00:12:13.078 "base_bdevs_list": [ 00:12:13.078 { 00:12:13.078 "name": "spare", 00:12:13.078 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:13.078 "is_configured": true, 00:12:13.078 "data_offset": 0, 00:12:13.078 "data_size": 65536 
00:12:13.078 }, 00:12:13.078 { 00:12:13.078 "name": "BaseBdev2", 00:12:13.078 "uuid": "87ddbb6d-b3a0-540f-a2dd-5d8354877834", 00:12:13.078 "is_configured": true, 00:12:13.078 "data_offset": 0, 00:12:13.078 "data_size": 65536 00:12:13.078 }, 00:12:13.078 { 00:12:13.079 "name": "BaseBdev3", 00:12:13.079 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:13.079 "is_configured": true, 00:12:13.079 "data_offset": 0, 00:12:13.079 "data_size": 65536 00:12:13.079 }, 00:12:13.079 { 00:12:13.079 "name": "BaseBdev4", 00:12:13.079 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:13.079 "is_configured": true, 00:12:13.079 "data_offset": 0, 00:12:13.079 "data_size": 65536 00:12:13.079 } 00:12:13.079 ] 00:12:13.079 }' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.079 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.079 [2024-11-27 21:44:36.124693] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.339 [2024-11-27 21:44:36.239696] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:13.339 [2024-11-27 21:44:36.239846] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.339 "name": "raid_bdev1", 00:12:13.339 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:13.339 "strip_size_kb": 0, 
00:12:13.339 "state": "online", 00:12:13.339 "raid_level": "raid1", 00:12:13.339 "superblock": false, 00:12:13.339 "num_base_bdevs": 4, 00:12:13.339 "num_base_bdevs_discovered": 3, 00:12:13.339 "num_base_bdevs_operational": 3, 00:12:13.339 "process": { 00:12:13.339 "type": "rebuild", 00:12:13.339 "target": "spare", 00:12:13.339 "progress": { 00:12:13.339 "blocks": 12288, 00:12:13.339 "percent": 18 00:12:13.339 } 00:12:13.339 }, 00:12:13.339 "base_bdevs_list": [ 00:12:13.339 { 00:12:13.339 "name": "spare", 00:12:13.339 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": null, 00:12:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.339 "is_configured": false, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": "BaseBdev3", 00:12:13.339 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": "BaseBdev4", 00:12:13.339 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 } 00:12:13.339 ] 00:12:13.339 }' 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.339 [2024-11-27 21:44:36.365181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.339 "name": "raid_bdev1", 00:12:13.339 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:13.339 "strip_size_kb": 0, 00:12:13.339 "state": "online", 00:12:13.339 "raid_level": "raid1", 00:12:13.339 "superblock": false, 00:12:13.339 "num_base_bdevs": 4, 00:12:13.339 "num_base_bdevs_discovered": 3, 00:12:13.339 "num_base_bdevs_operational": 3, 00:12:13.339 "process": { 00:12:13.339 "type": "rebuild", 00:12:13.339 "target": "spare", 00:12:13.339 "progress": { 00:12:13.339 "blocks": 14336, 00:12:13.339 "percent": 21 00:12:13.339 } 00:12:13.339 }, 
00:12:13.339 "base_bdevs_list": [ 00:12:13.339 { 00:12:13.339 "name": "spare", 00:12:13.339 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": null, 00:12:13.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.339 "is_configured": false, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": "BaseBdev3", 00:12:13.339 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 }, 00:12:13.339 { 00:12:13.339 "name": "BaseBdev4", 00:12:13.339 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:13.339 "is_configured": true, 00:12:13.339 "data_offset": 0, 00:12:13.339 "data_size": 65536 00:12:13.339 } 00:12:13.339 ] 00:12:13.339 }' 00:12:13.339 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.599 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.599 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.599 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.599 21:44:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:13.859 132.50 IOPS, 397.50 MiB/s [2024-11-27T21:44:36.980Z] [2024-11-27 21:44:36.735227] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:13.859 [2024-11-27 21:44:36.951140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:13.859 [2024-11-27 21:44:36.951837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:14.428 [2024-11-27 21:44:37.285465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:14.428 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.689 "name": "raid_bdev1", 00:12:14.689 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:14.689 "strip_size_kb": 0, 00:12:14.689 "state": "online", 00:12:14.689 "raid_level": "raid1", 00:12:14.689 "superblock": false, 00:12:14.689 "num_base_bdevs": 4, 00:12:14.689 "num_base_bdevs_discovered": 3, 00:12:14.689 "num_base_bdevs_operational": 3, 00:12:14.689 "process": { 00:12:14.689 "type": "rebuild", 00:12:14.689 "target": "spare", 
00:12:14.689 "progress": { 00:12:14.689 "blocks": 30720, 00:12:14.689 "percent": 46 00:12:14.689 } 00:12:14.689 }, 00:12:14.689 "base_bdevs_list": [ 00:12:14.689 { 00:12:14.689 "name": "spare", 00:12:14.689 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:14.689 "is_configured": true, 00:12:14.689 "data_offset": 0, 00:12:14.689 "data_size": 65536 00:12:14.689 }, 00:12:14.689 { 00:12:14.689 "name": null, 00:12:14.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.689 "is_configured": false, 00:12:14.689 "data_offset": 0, 00:12:14.689 "data_size": 65536 00:12:14.689 }, 00:12:14.689 { 00:12:14.689 "name": "BaseBdev3", 00:12:14.689 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:14.689 "is_configured": true, 00:12:14.689 "data_offset": 0, 00:12:14.689 "data_size": 65536 00:12:14.689 }, 00:12:14.689 { 00:12:14.689 "name": "BaseBdev4", 00:12:14.689 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:14.689 "is_configured": true, 00:12:14.689 "data_offset": 0, 00:12:14.689 "data_size": 65536 00:12:14.689 } 00:12:14.689 ] 00:12:14.689 }' 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.689 [2024-11-27 21:44:37.641128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:14.689 [2024-11-27 21:44:37.642060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.689 117.80 IOPS, 353.40 MiB/s [2024-11-27T21:44:37.810Z] 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:14.689 21:44:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:12:14.950 [2024-11-27 21:44:37.851549] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:14.950 [2024-11-27 21:44:37.852262] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:15.537 [2024-11-27 21:44:38.609473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:15.537 [2024-11-27 21:44:38.609968] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:15.797 104.67 IOPS, 314.00 MiB/s [2024-11-27T21:44:38.918Z] 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.797 "name": "raid_bdev1", 00:12:15.797 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:15.797 "strip_size_kb": 0, 00:12:15.797 "state": "online", 00:12:15.797 "raid_level": "raid1", 00:12:15.797 "superblock": false, 00:12:15.797 "num_base_bdevs": 4, 00:12:15.797 "num_base_bdevs_discovered": 3, 00:12:15.797 "num_base_bdevs_operational": 3, 00:12:15.797 "process": { 00:12:15.797 "type": "rebuild", 00:12:15.797 "target": "spare", 00:12:15.797 "progress": { 00:12:15.797 "blocks": 47104, 00:12:15.797 "percent": 71 00:12:15.797 } 00:12:15.797 }, 00:12:15.797 "base_bdevs_list": [ 00:12:15.797 { 00:12:15.797 "name": "spare", 00:12:15.797 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:15.797 "is_configured": true, 00:12:15.797 "data_offset": 0, 00:12:15.797 "data_size": 65536 00:12:15.797 }, 00:12:15.797 { 00:12:15.797 "name": null, 00:12:15.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.797 "is_configured": false, 00:12:15.797 "data_offset": 0, 00:12:15.797 "data_size": 65536 00:12:15.797 }, 00:12:15.797 { 00:12:15.797 "name": "BaseBdev3", 00:12:15.797 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:15.797 "is_configured": true, 00:12:15.797 "data_offset": 0, 00:12:15.797 "data_size": 65536 00:12:15.797 }, 00:12:15.797 { 00:12:15.797 "name": "BaseBdev4", 00:12:15.797 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:15.797 "is_configured": true, 00:12:15.797 "data_offset": 0, 00:12:15.797 "data_size": 65536 00:12:15.797 } 00:12:15.797 ] 00:12:15.797 }' 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare 
== \s\p\a\r\e ]] 00:12:15.797 21:44:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.366 [2024-11-27 21:44:39.359147] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:16.626 97.57 IOPS, 292.71 MiB/s [2024-11-27T21:44:39.747Z] [2024-11-27 21:44:39.662333] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:16.626 [2024-11-27 21:44:39.691700] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:16.626 [2024-11-27 21:44:39.694541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.885 21:44:39 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.885 "name": "raid_bdev1", 00:12:16.885 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:16.885 "strip_size_kb": 0, 00:12:16.885 "state": "online", 00:12:16.885 "raid_level": "raid1", 00:12:16.885 "superblock": false, 00:12:16.885 "num_base_bdevs": 4, 00:12:16.885 "num_base_bdevs_discovered": 3, 00:12:16.885 "num_base_bdevs_operational": 3, 00:12:16.885 "base_bdevs_list": [ 00:12:16.885 { 00:12:16.885 "name": "spare", 00:12:16.885 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:16.885 "is_configured": true, 00:12:16.885 "data_offset": 0, 00:12:16.885 "data_size": 65536 00:12:16.885 }, 00:12:16.885 { 00:12:16.885 "name": null, 00:12:16.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.885 "is_configured": false, 00:12:16.885 "data_offset": 0, 00:12:16.885 "data_size": 65536 00:12:16.885 }, 00:12:16.886 { 00:12:16.886 "name": "BaseBdev3", 00:12:16.886 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:16.886 "is_configured": true, 00:12:16.886 "data_offset": 0, 00:12:16.886 "data_size": 65536 00:12:16.886 }, 00:12:16.886 { 00:12:16.886 "name": "BaseBdev4", 00:12:16.886 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:16.886 "is_configured": true, 00:12:16.886 "data_offset": 0, 00:12:16.886 "data_size": 65536 00:12:16.886 } 00:12:16.886 ] 00:12:16.886 }' 00:12:16.886 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.886 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:16.886 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.886 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:16.886 21:44:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:16.886 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:12:16.886 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.886 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:16.886 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:16.886 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.146 "name": "raid_bdev1", 00:12:17.146 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:17.146 "strip_size_kb": 0, 00:12:17.146 "state": "online", 00:12:17.146 "raid_level": "raid1", 00:12:17.146 "superblock": false, 00:12:17.146 "num_base_bdevs": 4, 00:12:17.146 "num_base_bdevs_discovered": 3, 00:12:17.146 "num_base_bdevs_operational": 3, 00:12:17.146 "base_bdevs_list": [ 00:12:17.146 { 00:12:17.146 "name": "spare", 00:12:17.146 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": null, 00:12:17.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.146 "is_configured": false, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": 
"BaseBdev3", 00:12:17.146 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": "BaseBdev4", 00:12:17.146 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 } 00:12:17.146 ] 00:12:17.146 }' 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.146 "name": "raid_bdev1", 00:12:17.146 "uuid": "7002e96a-fda9-49dd-b045-d5d4860397a6", 00:12:17.146 "strip_size_kb": 0, 00:12:17.146 "state": "online", 00:12:17.146 "raid_level": "raid1", 00:12:17.146 "superblock": false, 00:12:17.146 "num_base_bdevs": 4, 00:12:17.146 "num_base_bdevs_discovered": 3, 00:12:17.146 "num_base_bdevs_operational": 3, 00:12:17.146 "base_bdevs_list": [ 00:12:17.146 { 00:12:17.146 "name": "spare", 00:12:17.146 "uuid": "baf6ce63-b1de-5287-98fb-7fb113c06928", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": null, 00:12:17.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.146 "is_configured": false, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": "BaseBdev3", 00:12:17.146 "uuid": "28be8300-a4ca-559f-ba10-25dd5dd66399", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 }, 00:12:17.146 { 00:12:17.146 "name": "BaseBdev4", 00:12:17.146 "uuid": "03573bf3-cc33-5c82-a695-798296a347f5", 00:12:17.146 "is_configured": true, 00:12:17.146 "data_offset": 0, 00:12:17.146 "data_size": 65536 00:12:17.146 } 00:12:17.146 ] 00:12:17.146 }' 00:12:17.146 21:44:40 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.146 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.717 [2024-11-27 21:44:40.545421] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.717 [2024-11-27 21:44:40.545506] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.717 00:12:17.717 Latency(us) 00:12:17.717 [2024-11-27T21:44:40.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.717 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:17.717 raid_bdev1 : 7.92 90.38 271.14 0.00 0.00 14435.23 287.97 118136.51 00:12:17.717 [2024-11-27T21:44:40.838Z] =================================================================================================================== 00:12:17.717 [2024-11-27T21:44:40.838Z] Total : 90.38 271.14 0.00 0.00 14435.23 287.97 118136.51 00:12:17.717 [2024-11-27 21:44:40.560685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.717 [2024-11-27 21:44:40.560781] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.717 [2024-11-27 21:44:40.560950] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.717 [2024-11-27 21:44:40.561002] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:17.717 { 00:12:17.717 "results": [ 00:12:17.717 { 00:12:17.717 "job": "raid_bdev1", 00:12:17.717 "core_mask": "0x1", 
00:12:17.717 "workload": "randrw", 00:12:17.717 "percentage": 50, 00:12:17.717 "status": "finished", 00:12:17.717 "queue_depth": 2, 00:12:17.717 "io_size": 3145728, 00:12:17.717 "runtime": 7.922111, 00:12:17.717 "iops": 90.37995049551817, 00:12:17.717 "mibps": 271.1398514865545, 00:12:17.717 "io_failed": 0, 00:12:17.717 "io_timeout": 0, 00:12:17.717 "avg_latency_us": 14435.234895464857, 00:12:17.717 "min_latency_us": 287.97205240174674, 00:12:17.717 "max_latency_us": 118136.51004366812 00:12:17.717 } 00:12:17.717 ], 00:12:17.717 "core_count": 1 00:12:17.717 } 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.717 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:17.718 /dev/nbd0 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:17.718 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.978 1+0 records in 00:12:17.978 1+0 records out 00:12:17.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00050823 s, 8.1 MB/s 
00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:17.978 
21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:17.978 21:44:40 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:17.978 /dev/nbd1 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:17.978 1+0 records in 00:12:17.978 1+0 records out 00:12:17.978 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391917 s, 10.5 MB/s 00:12:17.978 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.238 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:18.499 
21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:18.499 /dev/nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:18.499 21:44:41 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.499 1+0 records in 00:12:18.499 1+0 records out 00:12:18.499 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054162 s, 7.6 MB/s 00:12:18.499 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.759 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.019 21:44:41 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 89032 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 89032 ']' 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 89032 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.019 21:44:42 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89032 00:12:19.279 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.279 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.279 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89032' 00:12:19.279 killing process with pid 89032 00:12:19.279 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 89032 00:12:19.279 Received shutdown signal, test time was about 9.508443 seconds 00:12:19.280 00:12:19.280 Latency(us) 00:12:19.280 [2024-11-27T21:44:42.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:19.280 [2024-11-27T21:44:42.401Z] =================================================================================================================== 00:12:19.280 [2024-11-27T21:44:42.401Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:19.280 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 89032 00:12:19.280 [2024-11-27 21:44:42.140659] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.280 [2024-11-27 21:44:42.187062] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.280 21:44:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:19.280 ************************************ 00:12:19.280 END TEST raid_rebuild_test_io 00:12:19.280 ************************************ 00:12:19.280 00:12:19.280 real 0m11.496s 00:12:19.280 user 0m15.016s 00:12:19.280 sys 0m1.665s 00:12:19.280 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.280 21:44:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.540 21:44:42 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test 
raid1 4 true true true 00:12:19.540 21:44:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:19.540 21:44:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.540 21:44:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.540 ************************************ 00:12:19.540 START TEST raid_rebuild_test_sb_io 00:12:19.540 ************************************ 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=89431 00:12:19.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 89431 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 89431 ']' 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.540 21:44:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:19.540 [2024-11-27 21:44:42.546684] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:12:19.540 [2024-11-27 21:44:42.546909] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89431 ] 00:12:19.540 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:19.540 Zero copy mechanism will not be used. 
00:12:19.801 [2024-11-27 21:44:42.701529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.801 [2024-11-27 21:44:42.730198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.801 [2024-11-27 21:44:42.773777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.801 [2024-11-27 21:44:42.773904] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 BaseBdev1_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 [2024-11-27 21:44:43.406390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:20.376 [2024-11-27 21:44:43.406504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.376 [2024-11-27 21:44:43.406542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 
00:12:20.376 [2024-11-27 21:44:43.406554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.376 [2024-11-27 21:44:43.408643] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.376 [2024-11-27 21:44:43.408685] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.376 BaseBdev1 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 BaseBdev2_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 [2024-11-27 21:44:43.434967] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:20.376 [2024-11-27 21:44:43.435016] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.376 [2024-11-27 21:44:43.435037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:20.376 [2024-11-27 21:44:43.435045] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.376 [2024-11-27 21:44:43.437060] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.376 [2024-11-27 21:44:43.437147] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.376 BaseBdev2 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 BaseBdev3_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.376 [2024-11-27 21:44:43.463668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:20.376 [2024-11-27 21:44:43.463722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.376 [2024-11-27 21:44:43.463744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:20.376 [2024-11-27 21:44:43.463754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.376 [2024-11-27 21:44:43.466025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.376 [2024-11-27 21:44:43.466058] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.376 BaseBdev3 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.376 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 BaseBdev4_malloc 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 [2024-11-27 21:44:43.504158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:20.651 [2024-11-27 21:44:43.504215] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.651 [2024-11-27 21:44:43.504241] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:20.651 [2024-11-27 21:44:43.504250] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.651 [2024-11-27 21:44:43.506373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.651 [2024-11-27 21:44:43.506478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.651 BaseBdev4 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 spare_malloc 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 spare_delay 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 [2024-11-27 21:44:43.544648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:20.651 [2024-11-27 21:44:43.544702] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.651 [2024-11-27 21:44:43.544724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:12:20.651 [2024-11-27 21:44:43.544733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.651 [2024-11-27 21:44:43.546925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.651 [2024-11-27 21:44:43.546958] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:20.651 spare 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 [2024-11-27 21:44:43.556714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.651 [2024-11-27 21:44:43.558526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.651 [2024-11-27 21:44:43.558602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.651 [2024-11-27 21:44:43.558650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.651 [2024-11-27 21:44:43.558833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:20.651 [2024-11-27 21:44:43.558845] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.651 [2024-11-27 21:44:43.559099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:12:20.651 [2024-11-27 21:44:43.559241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:20.651 [2024-11-27 21:44:43.559252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:20.651 [2024-11-27 21:44:43.559381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.651 "name": "raid_bdev1", 00:12:20.651 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:20.651 "strip_size_kb": 0, 00:12:20.651 "state": "online", 00:12:20.651 "raid_level": "raid1", 
00:12:20.651 "superblock": true, 00:12:20.651 "num_base_bdevs": 4, 00:12:20.651 "num_base_bdevs_discovered": 4, 00:12:20.651 "num_base_bdevs_operational": 4, 00:12:20.651 "base_bdevs_list": [ 00:12:20.651 { 00:12:20.651 "name": "BaseBdev1", 00:12:20.651 "uuid": "3b78e2d5-89be-589e-b220-d950c5c4b686", 00:12:20.651 "is_configured": true, 00:12:20.651 "data_offset": 2048, 00:12:20.651 "data_size": 63488 00:12:20.651 }, 00:12:20.651 { 00:12:20.651 "name": "BaseBdev2", 00:12:20.651 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:20.651 "is_configured": true, 00:12:20.651 "data_offset": 2048, 00:12:20.651 "data_size": 63488 00:12:20.651 }, 00:12:20.651 { 00:12:20.651 "name": "BaseBdev3", 00:12:20.651 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:20.651 "is_configured": true, 00:12:20.651 "data_offset": 2048, 00:12:20.651 "data_size": 63488 00:12:20.651 }, 00:12:20.651 { 00:12:20.651 "name": "BaseBdev4", 00:12:20.651 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:20.651 "is_configured": true, 00:12:20.651 "data_offset": 2048, 00:12:20.651 "data_size": 63488 00:12:20.651 } 00:12:20.651 ] 00:12:20.651 }' 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.651 21:44:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.222 [2024-11-27 21:44:44.040307] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.222 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.222 [2024-11-27 21:44:44.119780] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.223 21:44:44 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.223 "name": "raid_bdev1", 00:12:21.223 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:21.223 "strip_size_kb": 0, 00:12:21.223 "state": "online", 00:12:21.223 "raid_level": "raid1", 00:12:21.223 "superblock": true, 00:12:21.223 "num_base_bdevs": 4, 00:12:21.223 "num_base_bdevs_discovered": 3, 00:12:21.223 "num_base_bdevs_operational": 3, 00:12:21.223 "base_bdevs_list": [ 00:12:21.223 { 00:12:21.223 "name": null, 00:12:21.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.223 "is_configured": false, 00:12:21.223 "data_offset": 0, 00:12:21.223 "data_size": 
63488 00:12:21.223 }, 00:12:21.223 { 00:12:21.223 "name": "BaseBdev2", 00:12:21.223 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:21.223 "is_configured": true, 00:12:21.223 "data_offset": 2048, 00:12:21.223 "data_size": 63488 00:12:21.223 }, 00:12:21.223 { 00:12:21.223 "name": "BaseBdev3", 00:12:21.223 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:21.223 "is_configured": true, 00:12:21.223 "data_offset": 2048, 00:12:21.223 "data_size": 63488 00:12:21.223 }, 00:12:21.223 { 00:12:21.223 "name": "BaseBdev4", 00:12:21.223 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:21.223 "is_configured": true, 00:12:21.223 "data_offset": 2048, 00:12:21.223 "data_size": 63488 00:12:21.223 } 00:12:21.223 ] 00:12:21.223 }' 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.223 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.223 [2024-11-27 21:44:44.209712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:21.223 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:21.223 Zero copy mechanism will not be used. 00:12:21.223 Running I/O for 60 seconds... 
00:12:21.483 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:21.483 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.483 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.483 [2024-11-27 21:44:44.577829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:21.743 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.743 21:44:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:21.744 [2024-11-27 21:44:44.622650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002940 00:12:21.744 [2024-11-27 21:44:44.624680] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.744 [2024-11-27 21:44:44.733629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:21.744 [2024-11-27 21:44:44.734243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:21.744 [2024-11-27 21:44:44.857751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:21.744 [2024-11-27 21:44:44.858053] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.312 171.00 IOPS, 513.00 MiB/s [2024-11-27T21:44:45.433Z] [2024-11-27 21:44:45.218672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.312 [2024-11-27 21:44:45.220105] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:22.312 [2024-11-27 21:44:45.431784] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.312 [2024-11-27 21:44:45.432281] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.572 "name": "raid_bdev1", 00:12:22.572 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:22.572 "strip_size_kb": 0, 00:12:22.572 "state": "online", 00:12:22.572 "raid_level": "raid1", 00:12:22.572 "superblock": true, 00:12:22.572 "num_base_bdevs": 4, 00:12:22.572 "num_base_bdevs_discovered": 4, 00:12:22.572 "num_base_bdevs_operational": 4, 00:12:22.572 "process": { 00:12:22.572 "type": "rebuild", 00:12:22.572 "target": "spare", 00:12:22.572 "progress": { 
00:12:22.572 "blocks": 10240, 00:12:22.572 "percent": 16 00:12:22.572 } 00:12:22.572 }, 00:12:22.572 "base_bdevs_list": [ 00:12:22.572 { 00:12:22.572 "name": "spare", 00:12:22.572 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:22.572 "is_configured": true, 00:12:22.572 "data_offset": 2048, 00:12:22.572 "data_size": 63488 00:12:22.572 }, 00:12:22.572 { 00:12:22.572 "name": "BaseBdev2", 00:12:22.572 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:22.572 "is_configured": true, 00:12:22.572 "data_offset": 2048, 00:12:22.572 "data_size": 63488 00:12:22.572 }, 00:12:22.572 { 00:12:22.572 "name": "BaseBdev3", 00:12:22.572 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:22.572 "is_configured": true, 00:12:22.572 "data_offset": 2048, 00:12:22.572 "data_size": 63488 00:12:22.572 }, 00:12:22.572 { 00:12:22.572 "name": "BaseBdev4", 00:12:22.572 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:22.572 "is_configured": true, 00:12:22.572 "data_offset": 2048, 00:12:22.572 "data_size": 63488 00:12:22.572 } 00:12:22.572 ] 00:12:22.572 }' 00:12:22.572 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.832 21:44:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.832 [2024-11-27 21:44:45.748664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 
offset_begin: 12288 offset_end: 18432 00:12:22.832 [2024-11-27 21:44:45.751914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.092 [2024-11-27 21:44:45.958623] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.092 [2024-11-27 21:44:45.974296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.092 [2024-11-27 21:44:45.974344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.092 [2024-11-27 21:44:45.974359] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.092 [2024-11-27 21:44:45.992363] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000002870 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.092 21:44:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.092 "name": "raid_bdev1", 00:12:23.092 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:23.092 "strip_size_kb": 0, 00:12:23.092 "state": "online", 00:12:23.092 "raid_level": "raid1", 00:12:23.092 "superblock": true, 00:12:23.092 "num_base_bdevs": 4, 00:12:23.092 "num_base_bdevs_discovered": 3, 00:12:23.092 "num_base_bdevs_operational": 3, 00:12:23.092 "base_bdevs_list": [ 00:12:23.092 { 00:12:23.092 "name": null, 00:12:23.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.092 "is_configured": false, 00:12:23.092 "data_offset": 0, 00:12:23.092 "data_size": 63488 00:12:23.092 }, 00:12:23.092 { 00:12:23.092 "name": "BaseBdev2", 00:12:23.092 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:23.092 "is_configured": true, 00:12:23.092 "data_offset": 2048, 00:12:23.092 "data_size": 63488 00:12:23.092 }, 00:12:23.092 { 00:12:23.092 "name": "BaseBdev3", 00:12:23.092 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:23.092 "is_configured": true, 00:12:23.092 "data_offset": 2048, 00:12:23.092 "data_size": 63488 00:12:23.092 }, 00:12:23.092 { 00:12:23.092 "name": "BaseBdev4", 00:12:23.092 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:23.092 "is_configured": true, 00:12:23.092 "data_offset": 2048, 00:12:23.092 
"data_size": 63488 00:12:23.092 } 00:12:23.092 ] 00:12:23.092 }' 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.092 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.352 171.00 IOPS, 513.00 MiB/s [2024-11-27T21:44:46.473Z] 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.352 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.612 "name": "raid_bdev1", 00:12:23.612 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:23.612 "strip_size_kb": 0, 00:12:23.612 "state": "online", 00:12:23.612 "raid_level": "raid1", 00:12:23.612 "superblock": true, 00:12:23.612 "num_base_bdevs": 4, 00:12:23.612 "num_base_bdevs_discovered": 3, 00:12:23.612 "num_base_bdevs_operational": 3, 00:12:23.612 "base_bdevs_list": [ 00:12:23.612 { 00:12:23.612 "name": null, 
00:12:23.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.612 "is_configured": false, 00:12:23.612 "data_offset": 0, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev2", 00:12:23.612 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev3", 00:12:23.612 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 }, 00:12:23.612 { 00:12:23.612 "name": "BaseBdev4", 00:12:23.612 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:23.612 "is_configured": true, 00:12:23.612 "data_offset": 2048, 00:12:23.612 "data_size": 63488 00:12:23.612 } 00:12:23.612 ] 00:12:23.612 }' 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.612 [2024-11-27 21:44:46.584590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.612 21:44:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 
-- # sleep 1 00:12:23.612 [2024-11-27 21:44:46.630248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:12:23.612 [2024-11-27 21:44:46.632377] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:23.872 [2024-11-27 21:44:46.748795] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.872 [2024-11-27 21:44:46.749215] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:23.872 [2024-11-27 21:44:46.965954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:23.872 [2024-11-27 21:44:46.966330] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:24.131 171.33 IOPS, 514.00 MiB/s [2024-11-27T21:44:47.252Z] [2024-11-27 21:44:47.213727] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:24.391 [2024-11-27 21:44:47.329282] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.391 [2024-11-27 21:44:47.329899] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:24.650 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:24.650 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.651 "name": "raid_bdev1", 00:12:24.651 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:24.651 "strip_size_kb": 0, 00:12:24.651 "state": "online", 00:12:24.651 "raid_level": "raid1", 00:12:24.651 "superblock": true, 00:12:24.651 "num_base_bdevs": 4, 00:12:24.651 "num_base_bdevs_discovered": 4, 00:12:24.651 "num_base_bdevs_operational": 4, 00:12:24.651 "process": { 00:12:24.651 "type": "rebuild", 00:12:24.651 "target": "spare", 00:12:24.651 "progress": { 00:12:24.651 "blocks": 12288, 00:12:24.651 "percent": 19 00:12:24.651 } 00:12:24.651 }, 00:12:24.651 "base_bdevs_list": [ 00:12:24.651 { 00:12:24.651 "name": "spare", 00:12:24.651 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:24.651 "is_configured": true, 00:12:24.651 "data_offset": 2048, 00:12:24.651 "data_size": 63488 00:12:24.651 }, 00:12:24.651 { 00:12:24.651 "name": "BaseBdev2", 00:12:24.651 "uuid": "6920e4eb-e4ac-55b0-8cf5-87d4202c7db2", 00:12:24.651 "is_configured": true, 00:12:24.651 "data_offset": 2048, 00:12:24.651 "data_size": 63488 00:12:24.651 }, 00:12:24.651 { 00:12:24.651 "name": "BaseBdev3", 00:12:24.651 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:24.651 "is_configured": true, 00:12:24.651 "data_offset": 2048, 00:12:24.651 "data_size": 63488 00:12:24.651 }, 
00:12:24.651 { 00:12:24.651 "name": "BaseBdev4", 00:12:24.651 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:24.651 "is_configured": true, 00:12:24.651 "data_offset": 2048, 00:12:24.651 "data_size": 63488 00:12:24.651 } 00:12:24.651 ] 00:12:24.651 }' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.651 [2024-11-27 21:44:47.669445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:24.651 [2024-11-27 21:44:47.669880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:24.651 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.651 21:44:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.651 [2024-11-27 21:44:47.769556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:24.910 [2024-11-27 21:44:47.894759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:24.910 [2024-11-27 21:44:47.895466] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:25.170 [2024-11-27 21:44:48.097406] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002870 00:12:25.170 [2024-11-27 21:44:48.097505] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000002a10 00:12:25.170 [2024-11-27 21:44:48.104584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.170 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.170 "name": "raid_bdev1", 00:12:25.170 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:25.170 "strip_size_kb": 0, 00:12:25.171 "state": "online", 00:12:25.171 "raid_level": "raid1", 00:12:25.171 "superblock": true, 00:12:25.171 "num_base_bdevs": 4, 00:12:25.171 "num_base_bdevs_discovered": 3, 00:12:25.171 "num_base_bdevs_operational": 3, 00:12:25.171 "process": { 00:12:25.171 "type": "rebuild", 00:12:25.171 "target": "spare", 00:12:25.171 "progress": { 00:12:25.171 "blocks": 16384, 00:12:25.171 "percent": 25 00:12:25.171 } 00:12:25.171 }, 00:12:25.171 "base_bdevs_list": [ 00:12:25.171 { 00:12:25.171 "name": "spare", 00:12:25.171 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:25.171 "is_configured": true, 00:12:25.171 "data_offset": 2048, 00:12:25.171 "data_size": 63488 00:12:25.171 }, 00:12:25.171 { 00:12:25.171 "name": null, 00:12:25.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.171 "is_configured": false, 00:12:25.171 "data_offset": 0, 00:12:25.171 "data_size": 63488 00:12:25.171 }, 00:12:25.171 { 00:12:25.171 "name": "BaseBdev3", 00:12:25.171 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:25.171 "is_configured": true, 00:12:25.171 "data_offset": 2048, 00:12:25.171 "data_size": 63488 00:12:25.171 }, 00:12:25.171 { 00:12:25.171 "name": "BaseBdev4", 00:12:25.171 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:25.171 "is_configured": true, 00:12:25.171 "data_offset": 2048, 00:12:25.171 "data_size": 63488 00:12:25.171 } 00:12:25.171 ] 
00:12:25.171 }' 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.171 145.75 IOPS, 437.25 MiB/s [2024-11-27T21:44:48.292Z] 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=397 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.171 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:12:25.431 "name": "raid_bdev1", 00:12:25.431 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:25.431 "strip_size_kb": 0, 00:12:25.431 "state": "online", 00:12:25.431 "raid_level": "raid1", 00:12:25.431 "superblock": true, 00:12:25.431 "num_base_bdevs": 4, 00:12:25.431 "num_base_bdevs_discovered": 3, 00:12:25.431 "num_base_bdevs_operational": 3, 00:12:25.431 "process": { 00:12:25.431 "type": "rebuild", 00:12:25.431 "target": "spare", 00:12:25.431 "progress": { 00:12:25.431 "blocks": 16384, 00:12:25.431 "percent": 25 00:12:25.431 } 00:12:25.431 }, 00:12:25.431 "base_bdevs_list": [ 00:12:25.431 { 00:12:25.431 "name": "spare", 00:12:25.431 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:25.431 "is_configured": true, 00:12:25.431 "data_offset": 2048, 00:12:25.431 "data_size": 63488 00:12:25.431 }, 00:12:25.431 { 00:12:25.431 "name": null, 00:12:25.431 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.431 "is_configured": false, 00:12:25.431 "data_offset": 0, 00:12:25.431 "data_size": 63488 00:12:25.431 }, 00:12:25.431 { 00:12:25.431 "name": "BaseBdev3", 00:12:25.431 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:25.431 "is_configured": true, 00:12:25.431 "data_offset": 2048, 00:12:25.431 "data_size": 63488 00:12:25.431 }, 00:12:25.431 { 00:12:25.431 "name": "BaseBdev4", 00:12:25.431 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:25.431 "is_configured": true, 00:12:25.431 "data_offset": 2048, 00:12:25.431 "data_size": 63488 00:12:25.431 } 00:12:25.431 ] 00:12:25.431 }' 00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:12:25.431 21:44:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.431 [2024-11-27 21:44:48.449090] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:25.691 [2024-11-27 21:44:48.668314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:25.691 [2024-11-27 21:44:48.668710] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:25.950 [2024-11-27 21:44:49.000857] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:26.212 [2024-11-27 21:44:49.140246] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:26.473 132.40 IOPS, 397.20 MiB/s [2024-11-27T21:44:49.594Z] 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.473 "name": "raid_bdev1", 00:12:26.473 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:26.473 "strip_size_kb": 0, 00:12:26.473 "state": "online", 00:12:26.473 "raid_level": "raid1", 00:12:26.473 "superblock": true, 00:12:26.473 "num_base_bdevs": 4, 00:12:26.473 "num_base_bdevs_discovered": 3, 00:12:26.473 "num_base_bdevs_operational": 3, 00:12:26.473 "process": { 00:12:26.473 "type": "rebuild", 00:12:26.473 "target": "spare", 00:12:26.473 "progress": { 00:12:26.473 "blocks": 32768, 00:12:26.473 "percent": 51 00:12:26.473 } 00:12:26.473 }, 00:12:26.473 "base_bdevs_list": [ 00:12:26.473 { 00:12:26.473 "name": "spare", 00:12:26.473 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:26.473 "is_configured": true, 00:12:26.473 "data_offset": 2048, 00:12:26.473 "data_size": 63488 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": null, 00:12:26.473 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.473 "is_configured": false, 00:12:26.473 "data_offset": 0, 00:12:26.473 "data_size": 63488 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": "BaseBdev3", 00:12:26.473 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:26.473 "is_configured": true, 00:12:26.473 "data_offset": 2048, 00:12:26.473 "data_size": 63488 00:12:26.473 }, 00:12:26.473 { 00:12:26.473 "name": "BaseBdev4", 00:12:26.473 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:26.473 "is_configured": true, 00:12:26.473 "data_offset": 2048, 00:12:26.473 "data_size": 63488 00:12:26.473 } 00:12:26.473 ] 00:12:26.473 }' 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.473 [2024-11-27 
21:44:49.491923] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:26.473 [2024-11-27 21:44:49.492454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.473 21:44:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.732 [2024-11-27 21:44:49.826998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:12:27.301 120.50 IOPS, 361.50 MiB/s [2024-11-27T21:44:50.422Z] [2024-11-27 21:44:50.244552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:27.301 [2024-11-27 21:44:50.379017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.560 
21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.560 "name": "raid_bdev1", 00:12:27.560 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:27.560 "strip_size_kb": 0, 00:12:27.560 "state": "online", 00:12:27.560 "raid_level": "raid1", 00:12:27.560 "superblock": true, 00:12:27.560 "num_base_bdevs": 4, 00:12:27.560 "num_base_bdevs_discovered": 3, 00:12:27.560 "num_base_bdevs_operational": 3, 00:12:27.560 "process": { 00:12:27.560 "type": "rebuild", 00:12:27.560 "target": "spare", 00:12:27.560 "progress": { 00:12:27.560 "blocks": 49152, 00:12:27.560 "percent": 77 00:12:27.560 } 00:12:27.560 }, 00:12:27.560 "base_bdevs_list": [ 00:12:27.560 { 00:12:27.560 "name": "spare", 00:12:27.560 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:27.560 "is_configured": true, 00:12:27.560 "data_offset": 2048, 00:12:27.560 "data_size": 63488 00:12:27.560 }, 00:12:27.560 { 00:12:27.560 "name": null, 00:12:27.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.560 "is_configured": false, 00:12:27.560 "data_offset": 0, 00:12:27.560 "data_size": 63488 00:12:27.560 }, 00:12:27.560 { 00:12:27.560 "name": "BaseBdev3", 00:12:27.560 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:27.560 "is_configured": true, 00:12:27.560 "data_offset": 2048, 00:12:27.560 "data_size": 63488 00:12:27.560 }, 00:12:27.560 { 00:12:27.560 "name": "BaseBdev4", 00:12:27.560 "uuid": 
"deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:27.560 "is_configured": true, 00:12:27.560 "data_offset": 2048, 00:12:27.560 "data_size": 63488 00:12:27.560 } 00:12:27.560 ] 00:12:27.560 }' 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.560 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.820 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.820 21:44:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:27.820 [2024-11-27 21:44:50.714467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:28.079 [2024-11-27 21:44:51.148184] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:28.079 [2024-11-27 21:44:51.148464] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:28.599 110.00 IOPS, 330.00 MiB/s [2024-11-27T21:44:51.720Z] [2024-11-27 21:44:51.465058] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:28.599 [2024-11-27 21:44:51.570287] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:28.599 [2024-11-27 21:44:51.572514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.599 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.858 "name": "raid_bdev1", 00:12:28.858 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:28.858 "strip_size_kb": 0, 00:12:28.858 "state": "online", 00:12:28.858 "raid_level": "raid1", 00:12:28.858 "superblock": true, 00:12:28.858 "num_base_bdevs": 4, 00:12:28.858 "num_base_bdevs_discovered": 3, 00:12:28.858 "num_base_bdevs_operational": 3, 00:12:28.858 "base_bdevs_list": [ 00:12:28.858 { 00:12:28.858 "name": "spare", 00:12:28.858 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 2048, 00:12:28.858 "data_size": 63488 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": null, 00:12:28.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.858 "is_configured": false, 00:12:28.858 "data_offset": 0, 00:12:28.858 "data_size": 63488 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": "BaseBdev3", 00:12:28.858 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 
00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 2048, 00:12:28.858 "data_size": 63488 00:12:28.858 }, 00:12:28.858 { 00:12:28.858 "name": "BaseBdev4", 00:12:28.858 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:28.858 "is_configured": true, 00:12:28.858 "data_offset": 2048, 00:12:28.858 "data_size": 63488 00:12:28.858 } 00:12:28.858 ] 00:12:28.858 }' 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 21:44:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.858 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.858 "name": "raid_bdev1", 00:12:28.858 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:28.858 "strip_size_kb": 0, 00:12:28.858 "state": "online", 00:12:28.858 "raid_level": "raid1", 00:12:28.858 "superblock": true, 00:12:28.858 "num_base_bdevs": 4, 00:12:28.858 "num_base_bdevs_discovered": 3, 00:12:28.858 "num_base_bdevs_operational": 3, 00:12:28.858 "base_bdevs_list": [ 00:12:28.858 { 00:12:28.858 "name": "spare", 00:12:28.858 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:28.858 "is_configured": true, 00:12:28.859 "data_offset": 2048, 00:12:28.859 "data_size": 63488 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": null, 00:12:28.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.859 "is_configured": false, 00:12:28.859 "data_offset": 0, 00:12:28.859 "data_size": 63488 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": "BaseBdev3", 00:12:28.859 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:28.859 "is_configured": true, 00:12:28.859 "data_offset": 2048, 00:12:28.859 "data_size": 63488 00:12:28.859 }, 00:12:28.859 { 00:12:28.859 "name": "BaseBdev4", 00:12:28.859 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:28.859 "is_configured": true, 00:12:28.859 "data_offset": 2048, 00:12:28.859 "data_size": 63488 00:12:28.859 } 00:12:28.859 ] 00:12:28.859 }' 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.859 21:44:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.859 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.118 21:44:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.118 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.118 "name": "raid_bdev1", 00:12:29.118 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:29.118 "strip_size_kb": 0, 00:12:29.118 "state": "online", 00:12:29.118 "raid_level": "raid1", 00:12:29.118 
"superblock": true, 00:12:29.118 "num_base_bdevs": 4, 00:12:29.118 "num_base_bdevs_discovered": 3, 00:12:29.118 "num_base_bdevs_operational": 3, 00:12:29.118 "base_bdevs_list": [ 00:12:29.118 { 00:12:29.118 "name": "spare", 00:12:29.118 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:29.118 "is_configured": true, 00:12:29.118 "data_offset": 2048, 00:12:29.118 "data_size": 63488 00:12:29.118 }, 00:12:29.118 { 00:12:29.118 "name": null, 00:12:29.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.118 "is_configured": false, 00:12:29.118 "data_offset": 0, 00:12:29.118 "data_size": 63488 00:12:29.118 }, 00:12:29.118 { 00:12:29.118 "name": "BaseBdev3", 00:12:29.118 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:29.118 "is_configured": true, 00:12:29.118 "data_offset": 2048, 00:12:29.118 "data_size": 63488 00:12:29.118 }, 00:12:29.118 { 00:12:29.118 "name": "BaseBdev4", 00:12:29.118 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:29.118 "is_configured": true, 00:12:29.118 "data_offset": 2048, 00:12:29.118 "data_size": 63488 00:12:29.118 } 00:12:29.118 ] 00:12:29.118 }' 00:12:29.118 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.118 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.380 99.88 IOPS, 299.62 MiB/s [2024-11-27T21:44:52.501Z] 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.380 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.380 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.380 [2024-11-27 21:44:52.384428] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.380 [2024-11-27 21:44:52.384509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.380 00:12:29.380 Latency(us) 00:12:29.380 
[2024-11-27T21:44:52.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.380 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:29.380 raid_bdev1 : 8.24 98.47 295.40 0.00 0.00 12731.72 287.97 114931.26 00:12:29.380 [2024-11-27T21:44:52.501Z] =================================================================================================================== 00:12:29.380 [2024-11-27T21:44:52.501Z] Total : 98.47 295.40 0.00 0.00 12731.72 287.97 114931.26 00:12:29.380 [2024-11-27 21:44:52.435975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.380 [2024-11-27 21:44:52.436101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.380 [2024-11-27 21:44:52.436245] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.380 [2024-11-27 21:44:52.436306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:29.380 { 00:12:29.380 "results": [ 00:12:29.380 { 00:12:29.380 "job": "raid_bdev1", 00:12:29.380 "core_mask": "0x1", 00:12:29.380 "workload": "randrw", 00:12:29.380 "percentage": 50, 00:12:29.380 "status": "finished", 00:12:29.380 "queue_depth": 2, 00:12:29.380 "io_size": 3145728, 00:12:29.380 "runtime": 8.236211, 00:12:29.380 "iops": 98.46760846704875, 00:12:29.380 "mibps": 295.40282540114623, 00:12:29.380 "io_failed": 0, 00:12:29.380 "io_timeout": 0, 00:12:29.380 "avg_latency_us": 12731.715505683318, 00:12:29.380 "min_latency_us": 287.97205240174674, 00:12:29.380 "max_latency_us": 114931.2558951965 00:12:29.380 } 00:12:29.380 ], 00:12:29.380 "core_count": 1 00:12:29.380 } 00:12:29.380 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.380 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.380 
21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.380 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.381 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:29.661 /dev/nbd0 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.661 
21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.661 1+0 records in 00:12:29.661 1+0 records out 00:12:29.661 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541995 s, 7.6 MB/s 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.661 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:12:29.937 /dev/nbd1 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.937 1+0 records in 00:12:29.937 1+0 records out 00:12:29.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207258 s, 19.8 MB/s 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.937 21:44:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.937 21:44:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.937 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:30.197 
21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.197 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:12:30.457 /dev/nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.457 1+0 records in 00:12:30.457 1+0 records out 00:12:30.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054003 s, 7.6 MB/s 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.457 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:30.717 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.717 21:44:53 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.977 21:44:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.977 [2024-11-27 21:44:54.005414] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.977 
[2024-11-27 21:44:54.005469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:30.977 [2024-11-27 21:44:54.005491] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:30.977 [2024-11-27 21:44:54.005499] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.977 [2024-11-27 21:44:54.007702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.977 [2024-11-27 21:44:54.007739] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.977 [2024-11-27 21:44:54.007840] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:30.977 [2024-11-27 21:44:54.007879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.977 [2024-11-27 21:44:54.008003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:30.977 [2024-11-27 21:44:54.008097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:30.977 spare 00:12:30.977 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.977 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:30.977 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.977 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.236 [2024-11-27 21:44:54.108004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:12:31.237 [2024-11-27 21:44:54.108037] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:31.237 [2024-11-27 21:44:54.108345] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000337b0 00:12:31.237 [2024-11-27 21:44:54.108510] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:12:31.237 [2024-11-27 21:44:54.108526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:12:31.237 [2024-11-27 21:44:54.108669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.237 21:44:54 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.237 "name": "raid_bdev1", 00:12:31.237 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:31.237 "strip_size_kb": 0, 00:12:31.237 "state": "online", 00:12:31.237 "raid_level": "raid1", 00:12:31.237 "superblock": true, 00:12:31.237 "num_base_bdevs": 4, 00:12:31.237 "num_base_bdevs_discovered": 3, 00:12:31.237 "num_base_bdevs_operational": 3, 00:12:31.237 "base_bdevs_list": [ 00:12:31.237 { 00:12:31.237 "name": "spare", 00:12:31.237 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:31.237 "is_configured": true, 00:12:31.237 "data_offset": 2048, 00:12:31.237 "data_size": 63488 00:12:31.237 }, 00:12:31.237 { 00:12:31.237 "name": null, 00:12:31.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.237 "is_configured": false, 00:12:31.237 "data_offset": 2048, 00:12:31.237 "data_size": 63488 00:12:31.237 }, 00:12:31.237 { 00:12:31.237 "name": "BaseBdev3", 00:12:31.237 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:31.237 "is_configured": true, 00:12:31.237 "data_offset": 2048, 00:12:31.237 "data_size": 63488 00:12:31.237 }, 00:12:31.237 { 00:12:31.237 "name": "BaseBdev4", 00:12:31.237 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:31.237 "is_configured": true, 00:12:31.237 "data_offset": 2048, 00:12:31.237 "data_size": 63488 00:12:31.237 } 00:12:31.237 ] 00:12:31.237 }' 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.237 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.496 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.496 "name": "raid_bdev1", 00:12:31.496 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:31.496 "strip_size_kb": 0, 00:12:31.496 "state": "online", 00:12:31.496 "raid_level": "raid1", 00:12:31.496 "superblock": true, 00:12:31.496 "num_base_bdevs": 4, 00:12:31.496 "num_base_bdevs_discovered": 3, 00:12:31.496 "num_base_bdevs_operational": 3, 00:12:31.496 "base_bdevs_list": [ 00:12:31.496 { 00:12:31.496 "name": "spare", 00:12:31.496 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:31.496 "is_configured": true, 00:12:31.496 "data_offset": 2048, 00:12:31.496 "data_size": 63488 00:12:31.496 }, 00:12:31.496 { 00:12:31.496 "name": null, 00:12:31.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.496 "is_configured": false, 00:12:31.497 "data_offset": 2048, 00:12:31.497 "data_size": 63488 00:12:31.497 }, 00:12:31.497 { 00:12:31.497 "name": "BaseBdev3", 00:12:31.497 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 
00:12:31.497 "is_configured": true, 00:12:31.497 "data_offset": 2048, 00:12:31.497 "data_size": 63488 00:12:31.497 }, 00:12:31.497 { 00:12:31.497 "name": "BaseBdev4", 00:12:31.497 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:31.497 "is_configured": true, 00:12:31.497 "data_offset": 2048, 00:12:31.497 "data_size": 63488 00:12:31.497 } 00:12:31.497 ] 00:12:31.497 }' 00:12:31.497 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.497 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.497 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.756 [2024-11-27 21:44:54.680332] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.756 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.756 "name": "raid_bdev1", 00:12:31.756 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:31.756 "strip_size_kb": 0, 00:12:31.756 "state": 
"online", 00:12:31.756 "raid_level": "raid1", 00:12:31.756 "superblock": true, 00:12:31.756 "num_base_bdevs": 4, 00:12:31.756 "num_base_bdevs_discovered": 2, 00:12:31.756 "num_base_bdevs_operational": 2, 00:12:31.757 "base_bdevs_list": [ 00:12:31.757 { 00:12:31.757 "name": null, 00:12:31.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.757 "is_configured": false, 00:12:31.757 "data_offset": 0, 00:12:31.757 "data_size": 63488 00:12:31.757 }, 00:12:31.757 { 00:12:31.757 "name": null, 00:12:31.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.757 "is_configured": false, 00:12:31.757 "data_offset": 2048, 00:12:31.757 "data_size": 63488 00:12:31.757 }, 00:12:31.757 { 00:12:31.757 "name": "BaseBdev3", 00:12:31.757 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:31.757 "is_configured": true, 00:12:31.757 "data_offset": 2048, 00:12:31.757 "data_size": 63488 00:12:31.757 }, 00:12:31.757 { 00:12:31.757 "name": "BaseBdev4", 00:12:31.757 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:31.757 "is_configured": true, 00:12:31.757 "data_offset": 2048, 00:12:31.757 "data_size": 63488 00:12:31.757 } 00:12:31.757 ] 00:12:31.757 }' 00:12:31.757 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.757 21:44:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.015 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.015 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.015 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.015 [2024-11-27 21:44:55.107806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.015 [2024-11-27 21:44:55.108014] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:12:32.015 [2024-11-27 21:44:55.108029] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:32.015 [2024-11-27 21:44:55.108091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.015 [2024-11-27 21:44:55.112564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033880 00:12:32.015 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.015 21:44:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:32.015 [2024-11-27 21:44:55.114596] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.394 
"name": "raid_bdev1", 00:12:33.394 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:33.394 "strip_size_kb": 0, 00:12:33.394 "state": "online", 00:12:33.394 "raid_level": "raid1", 00:12:33.394 "superblock": true, 00:12:33.394 "num_base_bdevs": 4, 00:12:33.394 "num_base_bdevs_discovered": 3, 00:12:33.394 "num_base_bdevs_operational": 3, 00:12:33.394 "process": { 00:12:33.394 "type": "rebuild", 00:12:33.394 "target": "spare", 00:12:33.394 "progress": { 00:12:33.394 "blocks": 20480, 00:12:33.394 "percent": 32 00:12:33.394 } 00:12:33.394 }, 00:12:33.394 "base_bdevs_list": [ 00:12:33.394 { 00:12:33.394 "name": "spare", 00:12:33.394 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:33.394 "is_configured": true, 00:12:33.394 "data_offset": 2048, 00:12:33.394 "data_size": 63488 00:12:33.394 }, 00:12:33.394 { 00:12:33.394 "name": null, 00:12:33.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.394 "is_configured": false, 00:12:33.394 "data_offset": 2048, 00:12:33.394 "data_size": 63488 00:12:33.394 }, 00:12:33.394 { 00:12:33.394 "name": "BaseBdev3", 00:12:33.394 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:33.394 "is_configured": true, 00:12:33.394 "data_offset": 2048, 00:12:33.394 "data_size": 63488 00:12:33.394 }, 00:12:33.394 { 00:12:33.394 "name": "BaseBdev4", 00:12:33.394 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:33.394 "is_configured": true, 00:12:33.394 "data_offset": 2048, 00:12:33.394 "data_size": 63488 00:12:33.394 } 00:12:33.394 ] 00:12:33.394 }' 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.394 
21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.394 [2024-11-27 21:44:56.251150] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.394 [2024-11-27 21:44:56.318971] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:33.394 [2024-11-27 21:44:56.319028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.394 [2024-11-27 21:44:56.319045] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:33.394 [2024-11-27 21:44:56.319052] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.394 21:44:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.394 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.395 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.395 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.395 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.395 "name": "raid_bdev1", 00:12:33.395 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:33.395 "strip_size_kb": 0, 00:12:33.395 "state": "online", 00:12:33.395 "raid_level": "raid1", 00:12:33.395 "superblock": true, 00:12:33.395 "num_base_bdevs": 4, 00:12:33.395 "num_base_bdevs_discovered": 2, 00:12:33.395 "num_base_bdevs_operational": 2, 00:12:33.395 "base_bdevs_list": [ 00:12:33.395 { 00:12:33.395 "name": null, 00:12:33.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.395 "is_configured": false, 00:12:33.395 "data_offset": 0, 00:12:33.395 "data_size": 63488 00:12:33.395 }, 00:12:33.395 { 00:12:33.395 "name": null, 00:12:33.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.395 "is_configured": false, 00:12:33.395 "data_offset": 2048, 00:12:33.395 "data_size": 63488 00:12:33.395 }, 00:12:33.395 { 00:12:33.395 "name": "BaseBdev3", 00:12:33.395 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:33.395 "is_configured": true, 00:12:33.395 "data_offset": 2048, 00:12:33.395 "data_size": 63488 00:12:33.395 }, 00:12:33.395 { 00:12:33.395 "name": "BaseBdev4", 00:12:33.395 "uuid": 
"deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:33.395 "is_configured": true, 00:12:33.395 "data_offset": 2048, 00:12:33.395 "data_size": 63488 00:12:33.395 } 00:12:33.395 ] 00:12:33.395 }' 00:12:33.395 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.395 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.964 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:33.964 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.964 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.964 [2024-11-27 21:44:56.782903] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:33.964 [2024-11-27 21:44:56.783003] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.964 [2024-11-27 21:44:56.783050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:33.964 [2024-11-27 21:44:56.783077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.964 [2024-11-27 21:44:56.783586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.964 [2024-11-27 21:44:56.783646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:33.964 [2024-11-27 21:44:56.783784] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:33.964 [2024-11-27 21:44:56.783843] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:12:33.964 [2024-11-27 21:44:56.783896] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:33.964 [2024-11-27 21:44:56.783980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.964 [2024-11-27 21:44:56.788468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000033950 00:12:33.964 spare 00:12:33.964 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.964 21:44:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:33.964 [2024-11-27 21:44:56.790358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.901 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.901 "name": "raid_bdev1", 00:12:34.901 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:34.901 "strip_size_kb": 0, 00:12:34.901 
"state": "online", 00:12:34.901 "raid_level": "raid1", 00:12:34.901 "superblock": true, 00:12:34.901 "num_base_bdevs": 4, 00:12:34.901 "num_base_bdevs_discovered": 3, 00:12:34.901 "num_base_bdevs_operational": 3, 00:12:34.901 "process": { 00:12:34.901 "type": "rebuild", 00:12:34.901 "target": "spare", 00:12:34.901 "progress": { 00:12:34.901 "blocks": 20480, 00:12:34.901 "percent": 32 00:12:34.901 } 00:12:34.901 }, 00:12:34.901 "base_bdevs_list": [ 00:12:34.901 { 00:12:34.902 "name": "spare", 00:12:34.902 "uuid": "f72b400f-75c6-5143-9a8d-096d5dfaaef7", 00:12:34.902 "is_configured": true, 00:12:34.902 "data_offset": 2048, 00:12:34.902 "data_size": 63488 00:12:34.902 }, 00:12:34.902 { 00:12:34.902 "name": null, 00:12:34.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.902 "is_configured": false, 00:12:34.902 "data_offset": 2048, 00:12:34.902 "data_size": 63488 00:12:34.902 }, 00:12:34.902 { 00:12:34.902 "name": "BaseBdev3", 00:12:34.902 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:34.902 "is_configured": true, 00:12:34.902 "data_offset": 2048, 00:12:34.902 "data_size": 63488 00:12:34.902 }, 00:12:34.902 { 00:12:34.902 "name": "BaseBdev4", 00:12:34.902 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:34.902 "is_configured": true, 00:12:34.902 "data_offset": 2048, 00:12:34.902 "data_size": 63488 00:12:34.902 } 00:12:34.902 ] 00:12:34.902 }' 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:34.902 21:44:57 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.902 21:44:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.902 [2024-11-27 21:44:57.926615] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.902 [2024-11-27 21:44:57.994909] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:34.902 [2024-11-27 21:44:57.995029] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.902 [2024-11-27 21:44:57.995064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:34.902 [2024-11-27 21:44:57.995087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.902 21:44:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.902 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.161 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.161 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.161 "name": "raid_bdev1", 00:12:35.161 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:35.161 "strip_size_kb": 0, 00:12:35.161 "state": "online", 00:12:35.161 "raid_level": "raid1", 00:12:35.161 "superblock": true, 00:12:35.161 "num_base_bdevs": 4, 00:12:35.161 "num_base_bdevs_discovered": 2, 00:12:35.161 "num_base_bdevs_operational": 2, 00:12:35.161 "base_bdevs_list": [ 00:12:35.161 { 00:12:35.161 "name": null, 00:12:35.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.161 "is_configured": false, 00:12:35.161 "data_offset": 0, 00:12:35.161 "data_size": 63488 00:12:35.161 }, 00:12:35.161 { 00:12:35.161 "name": null, 00:12:35.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.161 "is_configured": false, 00:12:35.161 "data_offset": 2048, 00:12:35.161 "data_size": 63488 00:12:35.161 }, 00:12:35.161 { 00:12:35.161 "name": "BaseBdev3", 00:12:35.161 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:35.161 "is_configured": true, 00:12:35.161 "data_offset": 2048, 00:12:35.161 "data_size": 63488 00:12:35.161 }, 00:12:35.161 { 00:12:35.161 "name": "BaseBdev4", 00:12:35.161 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:35.161 "is_configured": true, 00:12:35.161 "data_offset": 2048, 00:12:35.161 
"data_size": 63488 00:12:35.161 } 00:12:35.161 ] 00:12:35.161 }' 00:12:35.161 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.161 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.421 "name": "raid_bdev1", 00:12:35.421 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:35.421 "strip_size_kb": 0, 00:12:35.421 "state": "online", 00:12:35.421 "raid_level": "raid1", 00:12:35.421 "superblock": true, 00:12:35.421 "num_base_bdevs": 4, 00:12:35.421 "num_base_bdevs_discovered": 2, 00:12:35.421 "num_base_bdevs_operational": 2, 00:12:35.421 "base_bdevs_list": [ 00:12:35.421 { 00:12:35.421 "name": null, 00:12:35.421 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:35.421 "is_configured": false, 00:12:35.421 "data_offset": 0, 00:12:35.421 "data_size": 63488 00:12:35.421 }, 00:12:35.421 { 00:12:35.421 "name": null, 00:12:35.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.421 "is_configured": false, 00:12:35.421 "data_offset": 2048, 00:12:35.421 "data_size": 63488 00:12:35.421 }, 00:12:35.421 { 00:12:35.421 "name": "BaseBdev3", 00:12:35.421 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:35.421 "is_configured": true, 00:12:35.421 "data_offset": 2048, 00:12:35.421 "data_size": 63488 00:12:35.421 }, 00:12:35.421 { 00:12:35.421 "name": "BaseBdev4", 00:12:35.421 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:35.421 "is_configured": true, 00:12:35.421 "data_offset": 2048, 00:12:35.421 "data_size": 63488 00:12:35.421 } 00:12:35.421 ] 00:12:35.421 }' 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.421 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.681 21:44:58 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.681 [2024-11-27 21:44:58.602567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:35.681 [2024-11-27 21:44:58.602632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.681 [2024-11-27 21:44:58.602653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:35.681 [2024-11-27 21:44:58.602663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.681 [2024-11-27 21:44:58.603063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.681 [2024-11-27 21:44:58.603084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:35.681 [2024-11-27 21:44:58.603150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:35.681 [2024-11-27 21:44:58.603166] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:35.681 [2024-11-27 21:44:58.603174] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:35.681 [2024-11-27 21:44:58.603186] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:35.681 BaseBdev1 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.681 21:44:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.620 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.621 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.621 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.621 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.621 "name": "raid_bdev1", 00:12:36.621 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:36.621 "strip_size_kb": 0, 00:12:36.621 "state": "online", 00:12:36.621 "raid_level": "raid1", 00:12:36.621 "superblock": true, 00:12:36.621 "num_base_bdevs": 4, 00:12:36.621 "num_base_bdevs_discovered": 2, 00:12:36.621 "num_base_bdevs_operational": 2, 00:12:36.621 "base_bdevs_list": [ 00:12:36.621 { 00:12:36.621 "name": null, 00:12:36.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.621 "is_configured": false, 00:12:36.621 
"data_offset": 0, 00:12:36.621 "data_size": 63488 00:12:36.621 }, 00:12:36.621 { 00:12:36.621 "name": null, 00:12:36.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.621 "is_configured": false, 00:12:36.621 "data_offset": 2048, 00:12:36.621 "data_size": 63488 00:12:36.621 }, 00:12:36.621 { 00:12:36.621 "name": "BaseBdev3", 00:12:36.621 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:36.621 "is_configured": true, 00:12:36.621 "data_offset": 2048, 00:12:36.621 "data_size": 63488 00:12:36.621 }, 00:12:36.621 { 00:12:36.621 "name": "BaseBdev4", 00:12:36.621 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:36.621 "is_configured": true, 00:12:36.621 "data_offset": 2048, 00:12:36.621 "data_size": 63488 00:12:36.621 } 00:12:36.621 ] 00:12:36.621 }' 00:12:36.621 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.621 21:44:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.189 "name": "raid_bdev1", 00:12:37.189 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:37.189 "strip_size_kb": 0, 00:12:37.189 "state": "online", 00:12:37.189 "raid_level": "raid1", 00:12:37.189 "superblock": true, 00:12:37.189 "num_base_bdevs": 4, 00:12:37.189 "num_base_bdevs_discovered": 2, 00:12:37.189 "num_base_bdevs_operational": 2, 00:12:37.189 "base_bdevs_list": [ 00:12:37.189 { 00:12:37.189 "name": null, 00:12:37.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.189 "is_configured": false, 00:12:37.189 "data_offset": 0, 00:12:37.189 "data_size": 63488 00:12:37.189 }, 00:12:37.189 { 00:12:37.189 "name": null, 00:12:37.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.189 "is_configured": false, 00:12:37.189 "data_offset": 2048, 00:12:37.189 "data_size": 63488 00:12:37.189 }, 00:12:37.189 { 00:12:37.189 "name": "BaseBdev3", 00:12:37.189 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:37.189 "is_configured": true, 00:12:37.189 "data_offset": 2048, 00:12:37.189 "data_size": 63488 00:12:37.189 }, 00:12:37.189 { 00:12:37.189 "name": "BaseBdev4", 00:12:37.189 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:37.189 "is_configured": true, 00:12:37.189 "data_offset": 2048, 00:12:37.189 "data_size": 63488 00:12:37.189 } 00:12:37.189 ] 00:12:37.189 }' 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:37.189 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.190 [2024-11-27 21:45:00.208136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:37.190 [2024-11-27 21:45:00.208340] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:12:37.190 [2024-11-27 21:45:00.208408] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:37.190 request: 00:12:37.190 { 00:12:37.190 "base_bdev": "BaseBdev1", 00:12:37.190 "raid_bdev": "raid_bdev1", 00:12:37.190 "method": "bdev_raid_add_base_bdev", 00:12:37.190 "req_id": 1 00:12:37.190 } 00:12:37.190 Got JSON-RPC error response 00:12:37.190 response: 00:12:37.190 { 00:12:37.190 "code": -22, 
00:12:37.190 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:37.190 } 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.190 21:45:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.148 21:45:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.148 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.408 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.408 "name": "raid_bdev1", 00:12:38.408 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:38.408 "strip_size_kb": 0, 00:12:38.408 "state": "online", 00:12:38.408 "raid_level": "raid1", 00:12:38.408 "superblock": true, 00:12:38.408 "num_base_bdevs": 4, 00:12:38.408 "num_base_bdevs_discovered": 2, 00:12:38.408 "num_base_bdevs_operational": 2, 00:12:38.408 "base_bdevs_list": [ 00:12:38.408 { 00:12:38.408 "name": null, 00:12:38.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.408 "is_configured": false, 00:12:38.408 "data_offset": 0, 00:12:38.408 "data_size": 63488 00:12:38.408 }, 00:12:38.408 { 00:12:38.408 "name": null, 00:12:38.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.408 "is_configured": false, 00:12:38.408 "data_offset": 2048, 00:12:38.408 "data_size": 63488 00:12:38.408 }, 00:12:38.408 { 00:12:38.408 "name": "BaseBdev3", 00:12:38.408 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:38.408 "is_configured": true, 00:12:38.408 "data_offset": 2048, 00:12:38.408 "data_size": 63488 00:12:38.408 }, 00:12:38.408 { 00:12:38.408 "name": "BaseBdev4", 00:12:38.408 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:38.408 "is_configured": true, 00:12:38.408 "data_offset": 2048, 00:12:38.408 "data_size": 63488 00:12:38.408 } 00:12:38.408 ] 00:12:38.408 }' 00:12:38.408 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.408 21:45:01 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.667 "name": "raid_bdev1", 00:12:38.667 "uuid": "735482bf-3e0a-4286-8c4d-e649931a3714", 00:12:38.667 "strip_size_kb": 0, 00:12:38.667 "state": "online", 00:12:38.667 "raid_level": "raid1", 00:12:38.667 "superblock": true, 00:12:38.667 "num_base_bdevs": 4, 00:12:38.667 "num_base_bdevs_discovered": 2, 00:12:38.667 "num_base_bdevs_operational": 2, 00:12:38.667 "base_bdevs_list": [ 00:12:38.667 { 00:12:38.667 "name": null, 00:12:38.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.667 "is_configured": false, 00:12:38.667 "data_offset": 0, 00:12:38.667 "data_size": 63488 00:12:38.667 }, 00:12:38.667 { 00:12:38.667 "name": null, 00:12:38.667 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:38.667 "is_configured": false, 00:12:38.667 "data_offset": 2048, 00:12:38.667 "data_size": 63488 00:12:38.667 }, 00:12:38.667 { 00:12:38.667 "name": "BaseBdev3", 00:12:38.667 "uuid": "3cb0d866-667d-5fd6-8771-1844f09fde11", 00:12:38.667 "is_configured": true, 00:12:38.667 "data_offset": 2048, 00:12:38.667 "data_size": 63488 00:12:38.667 }, 00:12:38.667 { 00:12:38.667 "name": "BaseBdev4", 00:12:38.667 "uuid": "deec7b19-88e3-5722-8a34-a316f3d3f8c8", 00:12:38.667 "is_configured": true, 00:12:38.667 "data_offset": 2048, 00:12:38.667 "data_size": 63488 00:12:38.667 } 00:12:38.667 ] 00:12:38.667 }' 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.667 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 89431 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 89431 ']' 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 89431 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89431 00:12:38.927 killing process with pid 89431 00:12:38.927 Received shutdown signal, test time was about 17.670580 seconds 00:12:38.927 00:12:38.927 Latency(us) 00:12:38.927 [2024-11-27T21:45:02.048Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:12:38.927 [2024-11-27T21:45:02.048Z] =================================================================================================================== 00:12:38.927 [2024-11-27T21:45:02.048Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89431' 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 89431 00:12:38.927 [2024-11-27 21:45:01.848455] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:38.927 [2024-11-27 21:45:01.848578] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:38.927 21:45:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 89431 00:12:38.927 [2024-11-27 21:45:01.848650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:38.927 [2024-11-27 21:45:01.848660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:12:38.927 [2024-11-27 21:45:01.893426] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:39.187 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:39.187 00:12:39.187 real 0m19.639s 00:12:39.187 user 0m26.039s 00:12:39.187 sys 0m2.511s 00:12:39.187 ************************************ 00:12:39.187 END TEST raid_rebuild_test_sb_io 00:12:39.187 ************************************ 00:12:39.187 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.187 21:45:02 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.187 21:45:02 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:12:39.187 21:45:02 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:12:39.187 21:45:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:39.187 21:45:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.187 21:45:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:39.187 ************************************ 00:12:39.187 START TEST raid5f_state_function_test 00:12:39.187 ************************************ 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:39.187 21:45:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=90136 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:39.187 21:45:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90136' 00:12:39.187 Process raid pid: 90136 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 90136 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 90136 ']' 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.187 21:45:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.187 [2024-11-27 21:45:02.276690] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:39.187 [2024-11-27 21:45:02.276856] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.447 [2024-11-27 21:45:02.432837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.447 [2024-11-27 21:45:02.456962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.447 [2024-11-27 21:45:02.498358] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.447 [2024-11-27 21:45:02.498397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 [2024-11-27 21:45:03.080294] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.013 [2024-11-27 21:45:03.080357] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.013 [2024-11-27 21:45:03.080380] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.013 [2024-11-27 21:45:03.080394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.013 [2024-11-27 21:45:03.080401] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:40.013 [2024-11-27 21:45:03.080414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.013 "name": "Existed_Raid", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.013 "strip_size_kb": 64, 00:12:40.013 "state": "configuring", 00:12:40.013 "raid_level": "raid5f", 00:12:40.013 "superblock": false, 00:12:40.013 "num_base_bdevs": 3, 00:12:40.013 "num_base_bdevs_discovered": 0, 00:12:40.013 "num_base_bdevs_operational": 3, 00:12:40.013 "base_bdevs_list": [ 00:12:40.013 { 00:12:40.013 "name": "BaseBdev1", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.013 "is_configured": false, 00:12:40.013 "data_offset": 0, 00:12:40.013 "data_size": 0 00:12:40.013 }, 00:12:40.013 { 00:12:40.013 "name": "BaseBdev2", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.013 "is_configured": false, 00:12:40.013 "data_offset": 0, 00:12:40.013 "data_size": 0 00:12:40.013 }, 00:12:40.013 { 00:12:40.013 "name": "BaseBdev3", 00:12:40.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.013 "is_configured": false, 00:12:40.013 "data_offset": 0, 00:12:40.013 "data_size": 0 00:12:40.013 } 00:12:40.013 ] 00:12:40.013 }' 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.013 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 [2024-11-27 21:45:03.507484] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:40.582 [2024-11-27 21:45:03.507524] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000001200 name Existed_Raid, state configuring 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 [2024-11-27 21:45:03.519488] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:40.582 [2024-11-27 21:45:03.519524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:40.582 [2024-11-27 21:45:03.519531] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.582 [2024-11-27 21:45:03.519540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.582 [2024-11-27 21:45:03.519546] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.582 [2024-11-27 21:45:03.519554] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 [2024-11-27 21:45:03.540403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.582 BaseBdev1 00:12:40.582 21:45:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.582 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.582 [ 00:12:40.582 { 00:12:40.582 "name": "BaseBdev1", 00:12:40.582 "aliases": [ 00:12:40.582 "b68599a3-431c-4f92-8261-6668902d573f" 00:12:40.583 ], 00:12:40.583 "product_name": "Malloc disk", 00:12:40.583 "block_size": 512, 00:12:40.583 "num_blocks": 65536, 00:12:40.583 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:40.583 "assigned_rate_limits": { 00:12:40.583 "rw_ios_per_sec": 0, 00:12:40.583 
"rw_mbytes_per_sec": 0, 00:12:40.583 "r_mbytes_per_sec": 0, 00:12:40.583 "w_mbytes_per_sec": 0 00:12:40.583 }, 00:12:40.583 "claimed": true, 00:12:40.583 "claim_type": "exclusive_write", 00:12:40.583 "zoned": false, 00:12:40.583 "supported_io_types": { 00:12:40.583 "read": true, 00:12:40.583 "write": true, 00:12:40.583 "unmap": true, 00:12:40.583 "flush": true, 00:12:40.583 "reset": true, 00:12:40.583 "nvme_admin": false, 00:12:40.583 "nvme_io": false, 00:12:40.583 "nvme_io_md": false, 00:12:40.583 "write_zeroes": true, 00:12:40.583 "zcopy": true, 00:12:40.583 "get_zone_info": false, 00:12:40.583 "zone_management": false, 00:12:40.583 "zone_append": false, 00:12:40.583 "compare": false, 00:12:40.583 "compare_and_write": false, 00:12:40.583 "abort": true, 00:12:40.583 "seek_hole": false, 00:12:40.583 "seek_data": false, 00:12:40.583 "copy": true, 00:12:40.583 "nvme_iov_md": false 00:12:40.583 }, 00:12:40.583 "memory_domains": [ 00:12:40.583 { 00:12:40.583 "dma_device_id": "system", 00:12:40.583 "dma_device_type": 1 00:12:40.583 }, 00:12:40.583 { 00:12:40.583 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.583 "dma_device_type": 2 00:12:40.583 } 00:12:40.583 ], 00:12:40.583 "driver_specific": {} 00:12:40.583 } 00:12:40.583 ] 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:40.583 21:45:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.583 "name": "Existed_Raid", 00:12:40.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.583 "strip_size_kb": 64, 00:12:40.583 "state": "configuring", 00:12:40.583 "raid_level": "raid5f", 00:12:40.583 "superblock": false, 00:12:40.583 "num_base_bdevs": 3, 00:12:40.583 "num_base_bdevs_discovered": 1, 00:12:40.583 "num_base_bdevs_operational": 3, 00:12:40.583 "base_bdevs_list": [ 00:12:40.583 { 00:12:40.583 "name": "BaseBdev1", 00:12:40.583 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:40.583 "is_configured": true, 00:12:40.583 "data_offset": 0, 00:12:40.583 "data_size": 65536 00:12:40.583 }, 00:12:40.583 { 00:12:40.583 "name": 
"BaseBdev2", 00:12:40.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.583 "is_configured": false, 00:12:40.583 "data_offset": 0, 00:12:40.583 "data_size": 0 00:12:40.583 }, 00:12:40.583 { 00:12:40.583 "name": "BaseBdev3", 00:12:40.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.583 "is_configured": false, 00:12:40.583 "data_offset": 0, 00:12:40.583 "data_size": 0 00:12:40.583 } 00:12:40.583 ] 00:12:40.583 }' 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.583 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.153 [2024-11-27 21:45:03.983740] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:41.153 [2024-11-27 21:45:03.983790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.153 [2024-11-27 21:45:03.995744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:41.153 [2024-11-27 21:45:03.997577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:12:41.153 [2024-11-27 21:45:03.997613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:41.153 [2024-11-27 21:45:03.997622] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:41.153 [2024-11-27 21:45:03.997632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.153 21:45:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.153 "name": "Existed_Raid", 00:12:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.153 "strip_size_kb": 64, 00:12:41.153 "state": "configuring", 00:12:41.153 "raid_level": "raid5f", 00:12:41.153 "superblock": false, 00:12:41.153 "num_base_bdevs": 3, 00:12:41.153 "num_base_bdevs_discovered": 1, 00:12:41.153 "num_base_bdevs_operational": 3, 00:12:41.153 "base_bdevs_list": [ 00:12:41.153 { 00:12:41.153 "name": "BaseBdev1", 00:12:41.153 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:41.153 "is_configured": true, 00:12:41.153 "data_offset": 0, 00:12:41.153 "data_size": 65536 00:12:41.153 }, 00:12:41.153 { 00:12:41.153 "name": "BaseBdev2", 00:12:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.153 "is_configured": false, 00:12:41.153 "data_offset": 0, 00:12:41.153 "data_size": 0 00:12:41.153 }, 00:12:41.153 { 00:12:41.153 "name": "BaseBdev3", 00:12:41.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.153 "is_configured": false, 00:12:41.153 "data_offset": 0, 00:12:41.153 "data_size": 0 00:12:41.153 } 00:12:41.153 ] 00:12:41.153 }' 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.153 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.414 [2024-11-27 21:45:04.445802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.414 BaseBdev2 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:41.414 [ 00:12:41.414 { 00:12:41.414 "name": "BaseBdev2", 00:12:41.414 "aliases": [ 00:12:41.414 "1b564516-1603-41d8-bc25-61f2b4433cdb" 00:12:41.414 ], 00:12:41.414 "product_name": "Malloc disk", 00:12:41.414 "block_size": 512, 00:12:41.414 "num_blocks": 65536, 00:12:41.414 "uuid": "1b564516-1603-41d8-bc25-61f2b4433cdb", 00:12:41.414 "assigned_rate_limits": { 00:12:41.414 "rw_ios_per_sec": 0, 00:12:41.414 "rw_mbytes_per_sec": 0, 00:12:41.414 "r_mbytes_per_sec": 0, 00:12:41.414 "w_mbytes_per_sec": 0 00:12:41.414 }, 00:12:41.414 "claimed": true, 00:12:41.414 "claim_type": "exclusive_write", 00:12:41.414 "zoned": false, 00:12:41.414 "supported_io_types": { 00:12:41.414 "read": true, 00:12:41.414 "write": true, 00:12:41.414 "unmap": true, 00:12:41.414 "flush": true, 00:12:41.414 "reset": true, 00:12:41.414 "nvme_admin": false, 00:12:41.414 "nvme_io": false, 00:12:41.414 "nvme_io_md": false, 00:12:41.414 "write_zeroes": true, 00:12:41.414 "zcopy": true, 00:12:41.414 "get_zone_info": false, 00:12:41.414 "zone_management": false, 00:12:41.414 "zone_append": false, 00:12:41.414 "compare": false, 00:12:41.414 "compare_and_write": false, 00:12:41.414 "abort": true, 00:12:41.414 "seek_hole": false, 00:12:41.414 "seek_data": false, 00:12:41.414 "copy": true, 00:12:41.414 "nvme_iov_md": false 00:12:41.414 }, 00:12:41.414 "memory_domains": [ 00:12:41.414 { 00:12:41.414 "dma_device_id": "system", 00:12:41.414 "dma_device_type": 1 00:12:41.414 }, 00:12:41.414 { 00:12:41.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.414 "dma_device_type": 2 00:12:41.414 } 00:12:41.414 ], 00:12:41.414 "driver_specific": {} 00:12:41.414 } 00:12:41.414 ] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:12:41.414 "name": "Existed_Raid", 00:12:41.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.414 "strip_size_kb": 64, 00:12:41.414 "state": "configuring", 00:12:41.414 "raid_level": "raid5f", 00:12:41.414 "superblock": false, 00:12:41.414 "num_base_bdevs": 3, 00:12:41.414 "num_base_bdevs_discovered": 2, 00:12:41.414 "num_base_bdevs_operational": 3, 00:12:41.414 "base_bdevs_list": [ 00:12:41.414 { 00:12:41.414 "name": "BaseBdev1", 00:12:41.414 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:41.414 "is_configured": true, 00:12:41.414 "data_offset": 0, 00:12:41.414 "data_size": 65536 00:12:41.414 }, 00:12:41.414 { 00:12:41.414 "name": "BaseBdev2", 00:12:41.414 "uuid": "1b564516-1603-41d8-bc25-61f2b4433cdb", 00:12:41.414 "is_configured": true, 00:12:41.414 "data_offset": 0, 00:12:41.414 "data_size": 65536 00:12:41.414 }, 00:12:41.414 { 00:12:41.414 "name": "BaseBdev3", 00:12:41.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.414 "is_configured": false, 00:12:41.414 "data_offset": 0, 00:12:41.414 "data_size": 0 00:12:41.414 } 00:12:41.414 ] 00:12:41.414 }' 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.414 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 [2024-11-27 21:45:04.901029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.983 [2024-11-27 21:45:04.901163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:41.983 [2024-11-27 21:45:04.901211] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:41.983 [2024-11-27 21:45:04.901992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:41.983 [2024-11-27 21:45:04.903077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:41.983 [2024-11-27 21:45:04.903111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:41.983 [2024-11-27 21:45:04.903527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.983 BaseBdev3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 [ 00:12:41.983 { 00:12:41.983 "name": "BaseBdev3", 00:12:41.983 "aliases": [ 00:12:41.983 "dd4b0293-9a17-419d-ac45-e611ebd763b1" 00:12:41.983 ], 00:12:41.983 "product_name": "Malloc disk", 00:12:41.983 "block_size": 512, 00:12:41.983 "num_blocks": 65536, 00:12:41.983 "uuid": "dd4b0293-9a17-419d-ac45-e611ebd763b1", 00:12:41.983 "assigned_rate_limits": { 00:12:41.983 "rw_ios_per_sec": 0, 00:12:41.983 "rw_mbytes_per_sec": 0, 00:12:41.983 "r_mbytes_per_sec": 0, 00:12:41.983 "w_mbytes_per_sec": 0 00:12:41.983 }, 00:12:41.983 "claimed": true, 00:12:41.983 "claim_type": "exclusive_write", 00:12:41.983 "zoned": false, 00:12:41.983 "supported_io_types": { 00:12:41.983 "read": true, 00:12:41.983 "write": true, 00:12:41.983 "unmap": true, 00:12:41.983 "flush": true, 00:12:41.983 "reset": true, 00:12:41.983 "nvme_admin": false, 00:12:41.983 "nvme_io": false, 00:12:41.983 "nvme_io_md": false, 00:12:41.983 "write_zeroes": true, 00:12:41.983 "zcopy": true, 00:12:41.983 "get_zone_info": false, 00:12:41.983 "zone_management": false, 00:12:41.983 "zone_append": false, 00:12:41.983 "compare": false, 00:12:41.983 "compare_and_write": false, 00:12:41.983 "abort": true, 00:12:41.983 "seek_hole": false, 00:12:41.983 "seek_data": false, 00:12:41.983 "copy": true, 00:12:41.983 "nvme_iov_md": false 00:12:41.983 }, 00:12:41.983 "memory_domains": [ 00:12:41.983 { 00:12:41.983 "dma_device_id": "system", 00:12:41.983 "dma_device_type": 1 00:12:41.983 }, 00:12:41.983 { 00:12:41.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.983 "dma_device_type": 2 00:12:41.983 } 00:12:41.983 ], 00:12:41.983 "driver_specific": {} 00:12:41.983 } 00:12:41.983 ] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.983 21:45:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.983 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.983 "name": "Existed_Raid", 00:12:41.983 "uuid": "099cf517-30ed-4972-a3ea-1440308ae2b8", 00:12:41.983 "strip_size_kb": 64, 00:12:41.983 "state": "online", 00:12:41.984 "raid_level": "raid5f", 00:12:41.984 "superblock": false, 00:12:41.984 "num_base_bdevs": 3, 00:12:41.984 "num_base_bdevs_discovered": 3, 00:12:41.984 "num_base_bdevs_operational": 3, 00:12:41.984 "base_bdevs_list": [ 00:12:41.984 { 00:12:41.984 "name": "BaseBdev1", 00:12:41.984 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:41.984 "is_configured": true, 00:12:41.984 "data_offset": 0, 00:12:41.984 "data_size": 65536 00:12:41.984 }, 00:12:41.984 { 00:12:41.984 "name": "BaseBdev2", 00:12:41.984 "uuid": "1b564516-1603-41d8-bc25-61f2b4433cdb", 00:12:41.984 "is_configured": true, 00:12:41.984 "data_offset": 0, 00:12:41.984 "data_size": 65536 00:12:41.984 }, 00:12:41.984 { 00:12:41.984 "name": "BaseBdev3", 00:12:41.984 "uuid": "dd4b0293-9a17-419d-ac45-e611ebd763b1", 00:12:41.984 "is_configured": true, 00:12:41.984 "data_offset": 0, 00:12:41.984 "data_size": 65536 00:12:41.984 } 00:12:41.984 ] 00:12:41.984 }' 00:12:41.984 21:45:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.984 21:45:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:42.242 21:45:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.242 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.502 [2024-11-27 21:45:05.365092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:42.502 "name": "Existed_Raid", 00:12:42.502 "aliases": [ 00:12:42.502 "099cf517-30ed-4972-a3ea-1440308ae2b8" 00:12:42.502 ], 00:12:42.502 "product_name": "Raid Volume", 00:12:42.502 "block_size": 512, 00:12:42.502 "num_blocks": 131072, 00:12:42.502 "uuid": "099cf517-30ed-4972-a3ea-1440308ae2b8", 00:12:42.502 "assigned_rate_limits": { 00:12:42.502 "rw_ios_per_sec": 0, 00:12:42.502 "rw_mbytes_per_sec": 0, 00:12:42.502 "r_mbytes_per_sec": 0, 00:12:42.502 "w_mbytes_per_sec": 0 00:12:42.502 }, 00:12:42.502 "claimed": false, 00:12:42.502 "zoned": false, 00:12:42.502 "supported_io_types": { 00:12:42.502 "read": true, 00:12:42.502 "write": true, 00:12:42.502 "unmap": false, 00:12:42.502 "flush": false, 00:12:42.502 "reset": true, 00:12:42.502 "nvme_admin": false, 00:12:42.502 "nvme_io": false, 00:12:42.502 "nvme_io_md": false, 00:12:42.502 "write_zeroes": true, 00:12:42.502 "zcopy": false, 00:12:42.502 "get_zone_info": false, 00:12:42.502 "zone_management": false, 00:12:42.502 "zone_append": false, 
00:12:42.502 "compare": false, 00:12:42.502 "compare_and_write": false, 00:12:42.502 "abort": false, 00:12:42.502 "seek_hole": false, 00:12:42.502 "seek_data": false, 00:12:42.502 "copy": false, 00:12:42.502 "nvme_iov_md": false 00:12:42.502 }, 00:12:42.502 "driver_specific": { 00:12:42.502 "raid": { 00:12:42.502 "uuid": "099cf517-30ed-4972-a3ea-1440308ae2b8", 00:12:42.502 "strip_size_kb": 64, 00:12:42.502 "state": "online", 00:12:42.502 "raid_level": "raid5f", 00:12:42.502 "superblock": false, 00:12:42.502 "num_base_bdevs": 3, 00:12:42.502 "num_base_bdevs_discovered": 3, 00:12:42.502 "num_base_bdevs_operational": 3, 00:12:42.502 "base_bdevs_list": [ 00:12:42.502 { 00:12:42.502 "name": "BaseBdev1", 00:12:42.502 "uuid": "b68599a3-431c-4f92-8261-6668902d573f", 00:12:42.502 "is_configured": true, 00:12:42.502 "data_offset": 0, 00:12:42.502 "data_size": 65536 00:12:42.502 }, 00:12:42.502 { 00:12:42.502 "name": "BaseBdev2", 00:12:42.502 "uuid": "1b564516-1603-41d8-bc25-61f2b4433cdb", 00:12:42.502 "is_configured": true, 00:12:42.502 "data_offset": 0, 00:12:42.502 "data_size": 65536 00:12:42.502 }, 00:12:42.502 { 00:12:42.502 "name": "BaseBdev3", 00:12:42.502 "uuid": "dd4b0293-9a17-419d-ac45-e611ebd763b1", 00:12:42.502 "is_configured": true, 00:12:42.502 "data_offset": 0, 00:12:42.502 "data_size": 65536 00:12:42.502 } 00:12:42.502 ] 00:12:42.502 } 00:12:42.502 } 00:12:42.502 }' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:42.502 BaseBdev2 00:12:42.502 BaseBdev3' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.502 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.503 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.762 [2024-11-27 21:45:05.632427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:42.762 
21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.762 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.762 "name": "Existed_Raid", 00:12:42.762 "uuid": "099cf517-30ed-4972-a3ea-1440308ae2b8", 00:12:42.762 "strip_size_kb": 64, 00:12:42.762 "state": 
"online", 00:12:42.762 "raid_level": "raid5f", 00:12:42.762 "superblock": false, 00:12:42.762 "num_base_bdevs": 3, 00:12:42.762 "num_base_bdevs_discovered": 2, 00:12:42.762 "num_base_bdevs_operational": 2, 00:12:42.762 "base_bdevs_list": [ 00:12:42.762 { 00:12:42.762 "name": null, 00:12:42.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.762 "is_configured": false, 00:12:42.762 "data_offset": 0, 00:12:42.762 "data_size": 65536 00:12:42.762 }, 00:12:42.762 { 00:12:42.762 "name": "BaseBdev2", 00:12:42.762 "uuid": "1b564516-1603-41d8-bc25-61f2b4433cdb", 00:12:42.762 "is_configured": true, 00:12:42.762 "data_offset": 0, 00:12:42.762 "data_size": 65536 00:12:42.762 }, 00:12:42.762 { 00:12:42.762 "name": "BaseBdev3", 00:12:42.762 "uuid": "dd4b0293-9a17-419d-ac45-e611ebd763b1", 00:12:42.762 "is_configured": true, 00:12:42.762 "data_offset": 0, 00:12:42.763 "data_size": 65536 00:12:42.763 } 00:12:42.763 ] 00:12:42.763 }' 00:12:42.763 21:45:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.763 21:45:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.022 [2024-11-27 21:45:06.126787] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.022 [2024-11-27 21:45:06.126897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.022 [2024-11-27 21:45:06.137858] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.022 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.327 [2024-11-27 21:45:06.181808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:43.327 [2024-11-27 21:45:06.181854] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.327 BaseBdev2 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.327 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:43.328 [ 00:12:43.328 { 00:12:43.328 "name": "BaseBdev2", 00:12:43.328 "aliases": [ 00:12:43.328 "5126ecab-66b4-4a2e-af19-bba2aef17506" 00:12:43.328 ], 00:12:43.328 "product_name": "Malloc disk", 00:12:43.328 "block_size": 512, 00:12:43.328 "num_blocks": 65536, 00:12:43.328 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:43.328 "assigned_rate_limits": { 00:12:43.328 "rw_ios_per_sec": 0, 00:12:43.328 "rw_mbytes_per_sec": 0, 00:12:43.328 "r_mbytes_per_sec": 0, 00:12:43.328 "w_mbytes_per_sec": 0 00:12:43.328 }, 00:12:43.328 "claimed": false, 00:12:43.328 "zoned": false, 00:12:43.328 "supported_io_types": { 00:12:43.328 "read": true, 00:12:43.328 "write": true, 00:12:43.328 "unmap": true, 00:12:43.328 "flush": true, 00:12:43.328 "reset": true, 00:12:43.328 "nvme_admin": false, 00:12:43.328 "nvme_io": false, 00:12:43.328 "nvme_io_md": false, 00:12:43.328 "write_zeroes": true, 00:12:43.328 "zcopy": true, 00:12:43.328 "get_zone_info": false, 00:12:43.328 "zone_management": false, 00:12:43.328 "zone_append": false, 00:12:43.328 "compare": false, 00:12:43.328 "compare_and_write": false, 00:12:43.328 "abort": true, 00:12:43.328 "seek_hole": false, 00:12:43.328 "seek_data": false, 00:12:43.328 "copy": true, 00:12:43.328 "nvme_iov_md": false 00:12:43.328 }, 00:12:43.328 "memory_domains": [ 00:12:43.328 { 00:12:43.328 "dma_device_id": "system", 00:12:43.328 "dma_device_type": 1 00:12:43.328 }, 00:12:43.328 { 00:12:43.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.328 "dma_device_type": 2 00:12:43.328 } 00:12:43.328 ], 00:12:43.328 "driver_specific": {} 00:12:43.328 } 00:12:43.328 ] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 BaseBdev3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:43.328 [ 00:12:43.328 { 00:12:43.328 "name": "BaseBdev3", 00:12:43.328 "aliases": [ 00:12:43.328 "81761059-443c-49e4-b325-de0e9f1ddcb1" 00:12:43.328 ], 00:12:43.328 "product_name": "Malloc disk", 00:12:43.328 "block_size": 512, 00:12:43.328 "num_blocks": 65536, 00:12:43.328 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:43.328 "assigned_rate_limits": { 00:12:43.328 "rw_ios_per_sec": 0, 00:12:43.328 "rw_mbytes_per_sec": 0, 00:12:43.328 "r_mbytes_per_sec": 0, 00:12:43.328 "w_mbytes_per_sec": 0 00:12:43.328 }, 00:12:43.328 "claimed": false, 00:12:43.328 "zoned": false, 00:12:43.328 "supported_io_types": { 00:12:43.328 "read": true, 00:12:43.328 "write": true, 00:12:43.328 "unmap": true, 00:12:43.328 "flush": true, 00:12:43.328 "reset": true, 00:12:43.328 "nvme_admin": false, 00:12:43.328 "nvme_io": false, 00:12:43.328 "nvme_io_md": false, 00:12:43.328 "write_zeroes": true, 00:12:43.328 "zcopy": true, 00:12:43.328 "get_zone_info": false, 00:12:43.328 "zone_management": false, 00:12:43.328 "zone_append": false, 00:12:43.328 "compare": false, 00:12:43.328 "compare_and_write": false, 00:12:43.328 "abort": true, 00:12:43.328 "seek_hole": false, 00:12:43.328 "seek_data": false, 00:12:43.328 "copy": true, 00:12:43.328 "nvme_iov_md": false 00:12:43.328 }, 00:12:43.328 "memory_domains": [ 00:12:43.328 { 00:12:43.328 "dma_device_id": "system", 00:12:43.328 "dma_device_type": 1 00:12:43.328 }, 00:12:43.328 { 00:12:43.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.328 "dma_device_type": 2 00:12:43.328 } 00:12:43.328 ], 00:12:43.328 "driver_specific": {} 00:12:43.328 } 00:12:43.328 ] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.328 21:45:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 [2024-11-27 21:45:06.348416] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.328 [2024-11-27 21:45:06.348456] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.328 [2024-11-27 21:45:06.348476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.328 [2024-11-27 21:45:06.350238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.328 21:45:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.328 "name": "Existed_Raid", 00:12:43.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.328 "strip_size_kb": 64, 00:12:43.328 "state": "configuring", 00:12:43.328 "raid_level": "raid5f", 00:12:43.328 "superblock": false, 00:12:43.328 "num_base_bdevs": 3, 00:12:43.328 "num_base_bdevs_discovered": 2, 00:12:43.328 "num_base_bdevs_operational": 3, 00:12:43.328 "base_bdevs_list": [ 00:12:43.328 { 00:12:43.328 "name": "BaseBdev1", 00:12:43.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.328 "is_configured": false, 00:12:43.328 "data_offset": 0, 00:12:43.328 "data_size": 0 00:12:43.328 }, 00:12:43.328 { 00:12:43.328 "name": "BaseBdev2", 00:12:43.328 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:43.328 "is_configured": true, 00:12:43.328 "data_offset": 0, 00:12:43.328 "data_size": 65536 00:12:43.328 }, 00:12:43.328 { 00:12:43.328 "name": "BaseBdev3", 00:12:43.328 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:43.328 "is_configured": true, 
00:12:43.328 "data_offset": 0, 00:12:43.328 "data_size": 65536 00:12:43.328 } 00:12:43.328 ] 00:12:43.328 }' 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.328 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.898 [2024-11-27 21:45:06.799691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.898 21:45:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.898 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.899 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.899 21:45:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.899 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.899 "name": "Existed_Raid", 00:12:43.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.899 "strip_size_kb": 64, 00:12:43.899 "state": "configuring", 00:12:43.899 "raid_level": "raid5f", 00:12:43.899 "superblock": false, 00:12:43.899 "num_base_bdevs": 3, 00:12:43.899 "num_base_bdevs_discovered": 1, 00:12:43.899 "num_base_bdevs_operational": 3, 00:12:43.899 "base_bdevs_list": [ 00:12:43.899 { 00:12:43.899 "name": "BaseBdev1", 00:12:43.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.899 "is_configured": false, 00:12:43.899 "data_offset": 0, 00:12:43.899 "data_size": 0 00:12:43.899 }, 00:12:43.899 { 00:12:43.899 "name": null, 00:12:43.899 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:43.899 "is_configured": false, 00:12:43.899 "data_offset": 0, 00:12:43.899 "data_size": 65536 00:12:43.899 }, 00:12:43.899 { 00:12:43.899 "name": "BaseBdev3", 00:12:43.899 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:43.899 "is_configured": true, 00:12:43.899 "data_offset": 0, 00:12:43.899 "data_size": 65536 00:12:43.899 } 00:12:43.899 ] 00:12:43.899 }' 00:12:43.899 21:45:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.899 21:45:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.159 [2024-11-27 21:45:07.241766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.159 BaseBdev1 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.159 21:45:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.159 [ 00:12:44.159 { 00:12:44.159 "name": "BaseBdev1", 00:12:44.159 "aliases": [ 00:12:44.159 "46ef0f96-53a7-4cd3-8230-cb181a138699" 00:12:44.159 ], 00:12:44.159 "product_name": "Malloc disk", 00:12:44.159 "block_size": 512, 00:12:44.159 "num_blocks": 65536, 00:12:44.159 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:44.159 "assigned_rate_limits": { 00:12:44.159 "rw_ios_per_sec": 0, 00:12:44.159 "rw_mbytes_per_sec": 0, 00:12:44.159 "r_mbytes_per_sec": 0, 00:12:44.159 "w_mbytes_per_sec": 0 00:12:44.159 }, 00:12:44.159 "claimed": true, 00:12:44.159 "claim_type": "exclusive_write", 00:12:44.159 "zoned": false, 00:12:44.159 "supported_io_types": { 00:12:44.159 "read": true, 00:12:44.159 "write": true, 00:12:44.159 "unmap": true, 00:12:44.159 "flush": true, 00:12:44.159 "reset": true, 00:12:44.159 "nvme_admin": false, 00:12:44.159 "nvme_io": false, 00:12:44.159 "nvme_io_md": false, 00:12:44.159 "write_zeroes": true, 00:12:44.159 "zcopy": true, 00:12:44.159 "get_zone_info": false, 00:12:44.159 "zone_management": false, 00:12:44.159 "zone_append": false, 00:12:44.159 
"compare": false, 00:12:44.159 "compare_and_write": false, 00:12:44.159 "abort": true, 00:12:44.159 "seek_hole": false, 00:12:44.159 "seek_data": false, 00:12:44.159 "copy": true, 00:12:44.159 "nvme_iov_md": false 00:12:44.159 }, 00:12:44.159 "memory_domains": [ 00:12:44.159 { 00:12:44.159 "dma_device_id": "system", 00:12:44.159 "dma_device_type": 1 00:12:44.159 }, 00:12:44.159 { 00:12:44.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.159 "dma_device_type": 2 00:12:44.159 } 00:12:44.159 ], 00:12:44.159 "driver_specific": {} 00:12:44.159 } 00:12:44.159 ] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.159 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.418 21:45:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.418 "name": "Existed_Raid", 00:12:44.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.418 "strip_size_kb": 64, 00:12:44.418 "state": "configuring", 00:12:44.418 "raid_level": "raid5f", 00:12:44.418 "superblock": false, 00:12:44.418 "num_base_bdevs": 3, 00:12:44.418 "num_base_bdevs_discovered": 2, 00:12:44.418 "num_base_bdevs_operational": 3, 00:12:44.418 "base_bdevs_list": [ 00:12:44.418 { 00:12:44.418 "name": "BaseBdev1", 00:12:44.418 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:44.418 "is_configured": true, 00:12:44.418 "data_offset": 0, 00:12:44.418 "data_size": 65536 00:12:44.418 }, 00:12:44.418 { 00:12:44.418 "name": null, 00:12:44.418 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:44.418 "is_configured": false, 00:12:44.418 "data_offset": 0, 00:12:44.418 "data_size": 65536 00:12:44.418 }, 00:12:44.418 { 00:12:44.418 "name": "BaseBdev3", 00:12:44.418 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:44.418 "is_configured": true, 00:12:44.418 "data_offset": 0, 00:12:44.418 "data_size": 65536 00:12:44.418 } 00:12:44.418 ] 00:12:44.418 }' 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.418 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.677 21:45:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.677 [2024-11-27 21:45:07.772912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:44.677 21:45:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.677 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.937 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.937 "name": "Existed_Raid", 00:12:44.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.937 "strip_size_kb": 64, 00:12:44.937 "state": "configuring", 00:12:44.937 "raid_level": "raid5f", 00:12:44.937 "superblock": false, 00:12:44.937 "num_base_bdevs": 3, 00:12:44.937 "num_base_bdevs_discovered": 1, 00:12:44.937 "num_base_bdevs_operational": 3, 00:12:44.937 "base_bdevs_list": [ 00:12:44.937 { 00:12:44.937 "name": "BaseBdev1", 00:12:44.937 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:44.937 "is_configured": true, 00:12:44.937 "data_offset": 0, 00:12:44.937 "data_size": 65536 00:12:44.937 }, 00:12:44.937 { 00:12:44.937 "name": null, 00:12:44.937 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:44.937 "is_configured": false, 00:12:44.937 "data_offset": 0, 00:12:44.937 "data_size": 65536 00:12:44.937 }, 00:12:44.937 { 00:12:44.937 "name": null, 
00:12:44.937 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:44.937 "is_configured": false, 00:12:44.937 "data_offset": 0, 00:12:44.937 "data_size": 65536 00:12:44.937 } 00:12:44.937 ] 00:12:44.937 }' 00:12:44.937 21:45:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.937 21:45:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.196 [2024-11-27 21:45:08.232179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.196 21:45:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.196 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.196 "name": "Existed_Raid", 00:12:45.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.196 "strip_size_kb": 64, 00:12:45.196 "state": "configuring", 00:12:45.196 "raid_level": "raid5f", 00:12:45.196 "superblock": false, 00:12:45.196 "num_base_bdevs": 3, 00:12:45.196 "num_base_bdevs_discovered": 2, 00:12:45.196 "num_base_bdevs_operational": 3, 00:12:45.196 "base_bdevs_list": [ 00:12:45.196 { 
00:12:45.196 "name": "BaseBdev1", 00:12:45.196 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:45.196 "is_configured": true, 00:12:45.196 "data_offset": 0, 00:12:45.196 "data_size": 65536 00:12:45.196 }, 00:12:45.196 { 00:12:45.196 "name": null, 00:12:45.196 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:45.197 "is_configured": false, 00:12:45.197 "data_offset": 0, 00:12:45.197 "data_size": 65536 00:12:45.197 }, 00:12:45.197 { 00:12:45.197 "name": "BaseBdev3", 00:12:45.197 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:45.197 "is_configured": true, 00:12:45.197 "data_offset": 0, 00:12:45.197 "data_size": 65536 00:12:45.197 } 00:12:45.197 ] 00:12:45.197 }' 00:12:45.197 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.197 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.766 [2024-11-27 21:45:08.671490] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.766 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.767 "name": "Existed_Raid", 00:12:45.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.767 "strip_size_kb": 64, 00:12:45.767 "state": "configuring", 00:12:45.767 "raid_level": "raid5f", 00:12:45.767 "superblock": false, 00:12:45.767 "num_base_bdevs": 3, 00:12:45.767 "num_base_bdevs_discovered": 1, 00:12:45.767 "num_base_bdevs_operational": 3, 00:12:45.767 "base_bdevs_list": [ 00:12:45.767 { 00:12:45.767 "name": null, 00:12:45.767 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:45.767 "is_configured": false, 00:12:45.767 "data_offset": 0, 00:12:45.767 "data_size": 65536 00:12:45.767 }, 00:12:45.767 { 00:12:45.767 "name": null, 00:12:45.767 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:45.767 "is_configured": false, 00:12:45.767 "data_offset": 0, 00:12:45.767 "data_size": 65536 00:12:45.767 }, 00:12:45.767 { 00:12:45.767 "name": "BaseBdev3", 00:12:45.767 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:45.767 "is_configured": true, 00:12:45.767 "data_offset": 0, 00:12:45.767 "data_size": 65536 00:12:45.767 } 00:12:45.767 ] 00:12:45.767 }' 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.767 21:45:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.026 [2024-11-27 21:45:09.109288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.026 21:45:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.026 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.284 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.284 "name": "Existed_Raid", 00:12:46.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.284 "strip_size_kb": 64, 00:12:46.284 "state": "configuring", 00:12:46.284 "raid_level": "raid5f", 00:12:46.284 "superblock": false, 00:12:46.284 "num_base_bdevs": 3, 00:12:46.284 "num_base_bdevs_discovered": 2, 00:12:46.284 "num_base_bdevs_operational": 3, 00:12:46.284 "base_bdevs_list": [ 00:12:46.284 { 00:12:46.284 "name": null, 00:12:46.284 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:46.284 "is_configured": false, 00:12:46.284 "data_offset": 0, 00:12:46.284 "data_size": 65536 00:12:46.284 }, 00:12:46.284 { 00:12:46.284 "name": "BaseBdev2", 00:12:46.284 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:46.284 "is_configured": true, 00:12:46.284 "data_offset": 0, 00:12:46.284 "data_size": 65536 00:12:46.284 }, 00:12:46.284 { 00:12:46.284 "name": "BaseBdev3", 00:12:46.284 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:46.284 "is_configured": true, 00:12:46.284 "data_offset": 0, 00:12:46.284 "data_size": 65536 00:12:46.284 } 00:12:46.284 ] 00:12:46.284 }' 00:12:46.284 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.284 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:46.543 21:45:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 46ef0f96-53a7-4cd3-8230-cb181a138699 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 [2024-11-27 21:45:09.615417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:46.543 [2024-11-27 21:45:09.615464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:46.543 [2024-11-27 21:45:09.615474] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:12:46.543 [2024-11-27 21:45:09.615701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000002870 00:12:46.543 [2024-11-27 21:45:09.616177] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:46.543 [2024-11-27 21:45:09.616199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:46.543 [2024-11-27 21:45:09.616388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.543 NewBaseBdev 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.543 21:45:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.543 [ 00:12:46.543 { 00:12:46.543 "name": "NewBaseBdev", 00:12:46.543 "aliases": [ 00:12:46.543 "46ef0f96-53a7-4cd3-8230-cb181a138699" 00:12:46.543 ], 00:12:46.543 "product_name": "Malloc disk", 00:12:46.543 "block_size": 512, 00:12:46.543 "num_blocks": 65536, 00:12:46.543 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:46.543 "assigned_rate_limits": { 00:12:46.543 "rw_ios_per_sec": 0, 00:12:46.543 "rw_mbytes_per_sec": 0, 00:12:46.543 "r_mbytes_per_sec": 0, 00:12:46.543 "w_mbytes_per_sec": 0 00:12:46.543 }, 00:12:46.543 "claimed": true, 00:12:46.543 "claim_type": "exclusive_write", 00:12:46.543 "zoned": false, 00:12:46.543 "supported_io_types": { 00:12:46.543 "read": true, 00:12:46.543 "write": true, 00:12:46.543 "unmap": true, 00:12:46.543 "flush": true, 00:12:46.543 "reset": true, 00:12:46.543 "nvme_admin": false, 00:12:46.543 "nvme_io": false, 00:12:46.543 "nvme_io_md": false, 00:12:46.543 "write_zeroes": true, 00:12:46.543 "zcopy": true, 00:12:46.543 "get_zone_info": false, 00:12:46.543 "zone_management": false, 00:12:46.543 "zone_append": false, 00:12:46.543 "compare": false, 00:12:46.543 "compare_and_write": false, 00:12:46.543 "abort": true, 00:12:46.543 "seek_hole": false, 00:12:46.543 "seek_data": false, 00:12:46.543 "copy": true, 00:12:46.543 "nvme_iov_md": false 00:12:46.543 }, 00:12:46.543 "memory_domains": [ 00:12:46.543 { 00:12:46.543 "dma_device_id": "system", 00:12:46.543 "dma_device_type": 1 00:12:46.543 }, 00:12:46.543 { 00:12:46.543 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:46.543 "dma_device_type": 2 00:12:46.543 } 00:12:46.543 ], 00:12:46.543 "driver_specific": {} 00:12:46.543 } 00:12:46.543 ] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.543 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:46.543 21:45:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.544 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.803 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.803 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.803 "name": "Existed_Raid", 00:12:46.803 "uuid": "d03ece9a-07ad-400f-a7f2-bdc6ad743a37", 00:12:46.803 "strip_size_kb": 64, 00:12:46.803 "state": "online", 
00:12:46.803 "raid_level": "raid5f", 00:12:46.803 "superblock": false, 00:12:46.803 "num_base_bdevs": 3, 00:12:46.803 "num_base_bdevs_discovered": 3, 00:12:46.803 "num_base_bdevs_operational": 3, 00:12:46.803 "base_bdevs_list": [ 00:12:46.803 { 00:12:46.803 "name": "NewBaseBdev", 00:12:46.803 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:46.803 "is_configured": true, 00:12:46.803 "data_offset": 0, 00:12:46.803 "data_size": 65536 00:12:46.803 }, 00:12:46.803 { 00:12:46.803 "name": "BaseBdev2", 00:12:46.803 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:46.803 "is_configured": true, 00:12:46.803 "data_offset": 0, 00:12:46.803 "data_size": 65536 00:12:46.803 }, 00:12:46.803 { 00:12:46.803 "name": "BaseBdev3", 00:12:46.803 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:46.803 "is_configured": true, 00:12:46.803 "data_offset": 0, 00:12:46.803 "data_size": 65536 00:12:46.803 } 00:12:46.803 ] 00:12:46.803 }' 00:12:46.803 21:45:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.803 21:45:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:47.063 21:45:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.063 [2024-11-27 21:45:10.110855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.063 "name": "Existed_Raid", 00:12:47.063 "aliases": [ 00:12:47.063 "d03ece9a-07ad-400f-a7f2-bdc6ad743a37" 00:12:47.063 ], 00:12:47.063 "product_name": "Raid Volume", 00:12:47.063 "block_size": 512, 00:12:47.063 "num_blocks": 131072, 00:12:47.063 "uuid": "d03ece9a-07ad-400f-a7f2-bdc6ad743a37", 00:12:47.063 "assigned_rate_limits": { 00:12:47.063 "rw_ios_per_sec": 0, 00:12:47.063 "rw_mbytes_per_sec": 0, 00:12:47.063 "r_mbytes_per_sec": 0, 00:12:47.063 "w_mbytes_per_sec": 0 00:12:47.063 }, 00:12:47.063 "claimed": false, 00:12:47.063 "zoned": false, 00:12:47.063 "supported_io_types": { 00:12:47.063 "read": true, 00:12:47.063 "write": true, 00:12:47.063 "unmap": false, 00:12:47.063 "flush": false, 00:12:47.063 "reset": true, 00:12:47.063 "nvme_admin": false, 00:12:47.063 "nvme_io": false, 00:12:47.063 "nvme_io_md": false, 00:12:47.063 "write_zeroes": true, 00:12:47.063 "zcopy": false, 00:12:47.063 "get_zone_info": false, 00:12:47.063 "zone_management": false, 00:12:47.063 "zone_append": false, 00:12:47.063 "compare": false, 00:12:47.063 "compare_and_write": false, 00:12:47.063 "abort": false, 00:12:47.063 "seek_hole": false, 00:12:47.063 "seek_data": false, 00:12:47.063 "copy": false, 00:12:47.063 "nvme_iov_md": false 00:12:47.063 }, 00:12:47.063 "driver_specific": { 00:12:47.063 "raid": { 00:12:47.063 "uuid": 
"d03ece9a-07ad-400f-a7f2-bdc6ad743a37", 00:12:47.063 "strip_size_kb": 64, 00:12:47.063 "state": "online", 00:12:47.063 "raid_level": "raid5f", 00:12:47.063 "superblock": false, 00:12:47.063 "num_base_bdevs": 3, 00:12:47.063 "num_base_bdevs_discovered": 3, 00:12:47.063 "num_base_bdevs_operational": 3, 00:12:47.063 "base_bdevs_list": [ 00:12:47.063 { 00:12:47.063 "name": "NewBaseBdev", 00:12:47.063 "uuid": "46ef0f96-53a7-4cd3-8230-cb181a138699", 00:12:47.063 "is_configured": true, 00:12:47.063 "data_offset": 0, 00:12:47.063 "data_size": 65536 00:12:47.063 }, 00:12:47.063 { 00:12:47.063 "name": "BaseBdev2", 00:12:47.063 "uuid": "5126ecab-66b4-4a2e-af19-bba2aef17506", 00:12:47.063 "is_configured": true, 00:12:47.063 "data_offset": 0, 00:12:47.063 "data_size": 65536 00:12:47.063 }, 00:12:47.063 { 00:12:47.063 "name": "BaseBdev3", 00:12:47.063 "uuid": "81761059-443c-49e4-b325-de0e9f1ddcb1", 00:12:47.063 "is_configured": true, 00:12:47.063 "data_offset": 0, 00:12:47.063 "data_size": 65536 00:12:47.063 } 00:12:47.063 ] 00:12:47.063 } 00:12:47.063 } 00:12:47.063 }' 00:12:47.063 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:47.323 BaseBdev2 00:12:47.323 BaseBdev3' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.323 21:45:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.323 21:45:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.323 [2024-11-27 21:45:10.386148] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:47.323 [2024-11-27 21:45:10.386173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:47.323 [2024-11-27 21:45:10.386250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:47.323 [2024-11-27 21:45:10.386484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:47.323 [2024-11-27 21:45:10.386503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 90136 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 90136 ']' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 90136 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90136 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:47.323 killing process with pid 90136 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90136' 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 90136 00:12:47.323 [2024-11-27 21:45:10.434241] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:47.323 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 90136 00:12:47.583 [2024-11-27 21:45:10.464052] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.583 21:45:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:47.583 00:12:47.583 real 0m8.505s 00:12:47.583 user 0m14.444s 00:12:47.583 sys 0m1.842s 00:12:47.583 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.583 21:45:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.583 ************************************ 00:12:47.583 END TEST raid5f_state_function_test 00:12:47.583 ************************************ 00:12:47.843 21:45:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:12:47.843 21:45:10 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:47.843 21:45:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.843 21:45:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:47.843 ************************************ 00:12:47.843 START TEST raid5f_state_function_test_sb 00:12:47.843 ************************************ 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.843 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:47.844 21:45:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=90733 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:47.844 Process raid pid: 90733 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90733' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 90733 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 90733 ']' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.844 21:45:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.844 [2024-11-27 21:45:10.870229] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:47.844 [2024-11-27 21:45:10.870375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.104 [2024-11-27 21:45:11.030907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.104 [2024-11-27 21:45:11.055877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.104 [2024-11-27 21:45:11.098929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.104 [2024-11-27 21:45:11.098957] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.673 [2024-11-27 21:45:11.693896] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:48.673 [2024-11-27 21:45:11.693943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:48.673 [2024-11-27 21:45:11.693954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.673 [2024-11-27 21:45:11.693963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.673 [2024-11-27 21:45:11.693969] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:48.673 [2024-11-27 21:45:11.693981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.673 21:45:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.673 "name": "Existed_Raid", 00:12:48.673 "uuid": "5a27f1be-b01e-4eb8-978f-486bd94a9c92", 00:12:48.673 "strip_size_kb": 64, 00:12:48.673 "state": "configuring", 00:12:48.673 "raid_level": "raid5f", 00:12:48.673 "superblock": true, 00:12:48.673 "num_base_bdevs": 3, 00:12:48.673 "num_base_bdevs_discovered": 0, 00:12:48.673 "num_base_bdevs_operational": 3, 00:12:48.673 "base_bdevs_list": [ 00:12:48.673 { 00:12:48.673 "name": "BaseBdev1", 00:12:48.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.673 "is_configured": false, 00:12:48.673 "data_offset": 0, 00:12:48.673 "data_size": 0 00:12:48.673 }, 00:12:48.673 { 00:12:48.673 "name": "BaseBdev2", 00:12:48.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.673 "is_configured": false, 00:12:48.673 "data_offset": 0, 00:12:48.673 "data_size": 0 00:12:48.673 }, 00:12:48.673 { 00:12:48.673 "name": "BaseBdev3", 00:12:48.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.673 "is_configured": false, 00:12:48.673 "data_offset": 0, 00:12:48.673 "data_size": 0 00:12:48.673 } 00:12:48.673 ] 00:12:48.673 }' 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.673 21:45:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.242 [2024-11-27 21:45:12.133026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.242 
[2024-11-27 21:45:12.133067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.242 [2024-11-27 21:45:12.145036] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:49.242 [2024-11-27 21:45:12.145072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:49.242 [2024-11-27 21:45:12.145081] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.242 [2024-11-27 21:45:12.145090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.242 [2024-11-27 21:45:12.145096] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.242 [2024-11-27 21:45:12.145105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.242 [2024-11-27 21:45:12.165779] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.242 BaseBdev1 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:49.242 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.243 [ 00:12:49.243 { 00:12:49.243 "name": "BaseBdev1", 00:12:49.243 "aliases": [ 00:12:49.243 "53e23e46-3bb4-467b-823f-f40ada56fed8" 00:12:49.243 ], 00:12:49.243 "product_name": "Malloc disk", 00:12:49.243 "block_size": 512, 00:12:49.243 
"num_blocks": 65536, 00:12:49.243 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:49.243 "assigned_rate_limits": { 00:12:49.243 "rw_ios_per_sec": 0, 00:12:49.243 "rw_mbytes_per_sec": 0, 00:12:49.243 "r_mbytes_per_sec": 0, 00:12:49.243 "w_mbytes_per_sec": 0 00:12:49.243 }, 00:12:49.243 "claimed": true, 00:12:49.243 "claim_type": "exclusive_write", 00:12:49.243 "zoned": false, 00:12:49.243 "supported_io_types": { 00:12:49.243 "read": true, 00:12:49.243 "write": true, 00:12:49.243 "unmap": true, 00:12:49.243 "flush": true, 00:12:49.243 "reset": true, 00:12:49.243 "nvme_admin": false, 00:12:49.243 "nvme_io": false, 00:12:49.243 "nvme_io_md": false, 00:12:49.243 "write_zeroes": true, 00:12:49.243 "zcopy": true, 00:12:49.243 "get_zone_info": false, 00:12:49.243 "zone_management": false, 00:12:49.243 "zone_append": false, 00:12:49.243 "compare": false, 00:12:49.243 "compare_and_write": false, 00:12:49.243 "abort": true, 00:12:49.243 "seek_hole": false, 00:12:49.243 "seek_data": false, 00:12:49.243 "copy": true, 00:12:49.243 "nvme_iov_md": false 00:12:49.243 }, 00:12:49.243 "memory_domains": [ 00:12:49.243 { 00:12:49.243 "dma_device_id": "system", 00:12:49.243 "dma_device_type": 1 00:12:49.243 }, 00:12:49.243 { 00:12:49.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.243 "dma_device_type": 2 00:12:49.243 } 00:12:49.243 ], 00:12:49.243 "driver_specific": {} 00:12:49.243 } 00:12:49.243 ] 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.243 "name": "Existed_Raid", 00:12:49.243 "uuid": "69834ef5-5f08-4af9-82b4-309ede8e27c5", 00:12:49.243 "strip_size_kb": 64, 00:12:49.243 "state": "configuring", 00:12:49.243 "raid_level": "raid5f", 00:12:49.243 "superblock": true, 00:12:49.243 "num_base_bdevs": 3, 00:12:49.243 "num_base_bdevs_discovered": 1, 00:12:49.243 "num_base_bdevs_operational": 3, 00:12:49.243 "base_bdevs_list": [ 00:12:49.243 { 00:12:49.243 
"name": "BaseBdev1", 00:12:49.243 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:49.243 "is_configured": true, 00:12:49.243 "data_offset": 2048, 00:12:49.243 "data_size": 63488 00:12:49.243 }, 00:12:49.243 { 00:12:49.243 "name": "BaseBdev2", 00:12:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.243 "is_configured": false, 00:12:49.243 "data_offset": 0, 00:12:49.243 "data_size": 0 00:12:49.243 }, 00:12:49.243 { 00:12:49.243 "name": "BaseBdev3", 00:12:49.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.243 "is_configured": false, 00:12:49.243 "data_offset": 0, 00:12:49.243 "data_size": 0 00:12:49.243 } 00:12:49.243 ] 00:12:49.243 }' 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.243 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.502 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:49.502 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.502 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.762 [2024-11-27 21:45:12.625022] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:49.762 [2024-11-27 21:45:12.625065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:49.762 [2024-11-27 21:45:12.637043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:49.762 [2024-11-27 21:45:12.638865] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:49.762 [2024-11-27 21:45:12.638897] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:49.762 [2024-11-27 21:45:12.638907] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:49.762 [2024-11-27 21:45:12.638917] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.762 "name": "Existed_Raid", 00:12:49.762 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:49.762 "strip_size_kb": 64, 00:12:49.762 "state": "configuring", 00:12:49.762 "raid_level": "raid5f", 00:12:49.762 "superblock": true, 00:12:49.762 "num_base_bdevs": 3, 00:12:49.762 "num_base_bdevs_discovered": 1, 00:12:49.762 "num_base_bdevs_operational": 3, 00:12:49.762 "base_bdevs_list": [ 00:12:49.762 { 00:12:49.762 "name": "BaseBdev1", 00:12:49.762 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:49.762 "is_configured": true, 00:12:49.762 "data_offset": 2048, 00:12:49.762 "data_size": 63488 00:12:49.762 }, 00:12:49.762 { 00:12:49.762 "name": "BaseBdev2", 00:12:49.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.762 "is_configured": false, 00:12:49.762 "data_offset": 0, 00:12:49.762 "data_size": 0 00:12:49.762 }, 00:12:49.762 { 00:12:49.762 "name": "BaseBdev3", 00:12:49.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.762 "is_configured": false, 00:12:49.762 "data_offset": 0, 00:12:49.762 "data_size": 
0 00:12:49.762 } 00:12:49.762 ] 00:12:49.762 }' 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.762 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 [2024-11-27 21:45:12.971503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:50.022 BaseBdev2 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.022 21:45:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.022 [ 00:12:50.022 { 00:12:50.022 "name": "BaseBdev2", 00:12:50.022 "aliases": [ 00:12:50.022 "dec826b2-3a57-41b0-beef-912fc7aba3ae" 00:12:50.022 ], 00:12:50.022 "product_name": "Malloc disk", 00:12:50.022 "block_size": 512, 00:12:50.022 "num_blocks": 65536, 00:12:50.022 "uuid": "dec826b2-3a57-41b0-beef-912fc7aba3ae", 00:12:50.022 "assigned_rate_limits": { 00:12:50.022 "rw_ios_per_sec": 0, 00:12:50.022 "rw_mbytes_per_sec": 0, 00:12:50.022 "r_mbytes_per_sec": 0, 00:12:50.022 "w_mbytes_per_sec": 0 00:12:50.022 }, 00:12:50.022 "claimed": true, 00:12:50.022 "claim_type": "exclusive_write", 00:12:50.022 "zoned": false, 00:12:50.022 "supported_io_types": { 00:12:50.022 "read": true, 00:12:50.022 "write": true, 00:12:50.022 "unmap": true, 00:12:50.022 "flush": true, 00:12:50.022 "reset": true, 00:12:50.022 "nvme_admin": false, 00:12:50.022 "nvme_io": false, 00:12:50.022 "nvme_io_md": false, 00:12:50.022 "write_zeroes": true, 00:12:50.023 "zcopy": true, 00:12:50.023 "get_zone_info": false, 00:12:50.023 "zone_management": false, 00:12:50.023 "zone_append": false, 00:12:50.023 "compare": false, 00:12:50.023 "compare_and_write": false, 00:12:50.023 "abort": true, 00:12:50.023 "seek_hole": false, 00:12:50.023 "seek_data": false, 00:12:50.023 "copy": true, 00:12:50.023 "nvme_iov_md": false 00:12:50.023 }, 00:12:50.023 "memory_domains": [ 00:12:50.023 { 00:12:50.023 "dma_device_id": "system", 00:12:50.023 "dma_device_type": 1 00:12:50.023 }, 00:12:50.023 { 00:12:50.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.023 "dma_device_type": 2 00:12:50.023 } 
00:12:50.023 ], 00:12:50.023 "driver_specific": {} 00:12:50.023 } 00:12:50.023 ] 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.023 21:45:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.023 "name": "Existed_Raid", 00:12:50.023 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:50.023 "strip_size_kb": 64, 00:12:50.023 "state": "configuring", 00:12:50.023 "raid_level": "raid5f", 00:12:50.023 "superblock": true, 00:12:50.023 "num_base_bdevs": 3, 00:12:50.023 "num_base_bdevs_discovered": 2, 00:12:50.023 "num_base_bdevs_operational": 3, 00:12:50.023 "base_bdevs_list": [ 00:12:50.023 { 00:12:50.023 "name": "BaseBdev1", 00:12:50.023 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:50.023 "is_configured": true, 00:12:50.023 "data_offset": 2048, 00:12:50.023 "data_size": 63488 00:12:50.023 }, 00:12:50.023 { 00:12:50.023 "name": "BaseBdev2", 00:12:50.023 "uuid": "dec826b2-3a57-41b0-beef-912fc7aba3ae", 00:12:50.023 "is_configured": true, 00:12:50.023 "data_offset": 2048, 00:12:50.023 "data_size": 63488 00:12:50.023 }, 00:12:50.023 { 00:12:50.023 "name": "BaseBdev3", 00:12:50.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.023 "is_configured": false, 00:12:50.023 "data_offset": 0, 00:12:50.023 "data_size": 0 00:12:50.023 } 00:12:50.023 ] 00:12:50.023 }' 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.023 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 [2024-11-27 21:45:13.437050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.590 [2024-11-27 21:45:13.437644] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:50.590 [2024-11-27 21:45:13.437720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:50.590 BaseBdev3 00:12:50.590 [2024-11-27 21:45:13.438637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:50.590 [2024-11-27 21:45:13.440281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:50.590 [2024-11-27 21:45:13.440345] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:50.590 [2024-11-27 21:45:13.440880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 [ 00:12:50.590 { 00:12:50.590 "name": "BaseBdev3", 00:12:50.590 "aliases": [ 00:12:50.590 "15a52e82-d77e-4654-9c75-243d1e02a4b6" 00:12:50.590 ], 00:12:50.590 "product_name": "Malloc disk", 00:12:50.590 "block_size": 512, 00:12:50.590 "num_blocks": 65536, 00:12:50.590 "uuid": "15a52e82-d77e-4654-9c75-243d1e02a4b6", 00:12:50.590 "assigned_rate_limits": { 00:12:50.590 "rw_ios_per_sec": 0, 00:12:50.590 "rw_mbytes_per_sec": 0, 00:12:50.590 "r_mbytes_per_sec": 0, 00:12:50.590 "w_mbytes_per_sec": 0 00:12:50.590 }, 00:12:50.590 "claimed": true, 00:12:50.590 "claim_type": "exclusive_write", 00:12:50.590 "zoned": false, 00:12:50.590 "supported_io_types": { 00:12:50.590 "read": true, 00:12:50.590 "write": true, 00:12:50.590 "unmap": true, 00:12:50.590 "flush": true, 00:12:50.590 "reset": true, 00:12:50.590 "nvme_admin": false, 00:12:50.590 "nvme_io": false, 00:12:50.590 "nvme_io_md": false, 00:12:50.590 "write_zeroes": true, 00:12:50.590 "zcopy": true, 00:12:50.590 "get_zone_info": false, 00:12:50.590 "zone_management": false, 00:12:50.590 "zone_append": false, 00:12:50.590 "compare": false, 00:12:50.590 "compare_and_write": false, 00:12:50.590 "abort": true, 00:12:50.590 "seek_hole": false, 00:12:50.590 "seek_data": false, 00:12:50.590 "copy": true, 00:12:50.590 "nvme_iov_md": 
false 00:12:50.590 }, 00:12:50.590 "memory_domains": [ 00:12:50.590 { 00:12:50.590 "dma_device_id": "system", 00:12:50.590 "dma_device_type": 1 00:12:50.590 }, 00:12:50.590 { 00:12:50.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.590 "dma_device_type": 2 00:12:50.590 } 00:12:50.590 ], 00:12:50.590 "driver_specific": {} 00:12:50.590 } 00:12:50.590 ] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.590 "name": "Existed_Raid", 00:12:50.590 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:50.590 "strip_size_kb": 64, 00:12:50.590 "state": "online", 00:12:50.590 "raid_level": "raid5f", 00:12:50.590 "superblock": true, 00:12:50.590 "num_base_bdevs": 3, 00:12:50.590 "num_base_bdevs_discovered": 3, 00:12:50.590 "num_base_bdevs_operational": 3, 00:12:50.590 "base_bdevs_list": [ 00:12:50.590 { 00:12:50.590 "name": "BaseBdev1", 00:12:50.590 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:50.590 "is_configured": true, 00:12:50.590 "data_offset": 2048, 00:12:50.590 "data_size": 63488 00:12:50.590 }, 00:12:50.590 { 00:12:50.590 "name": "BaseBdev2", 00:12:50.590 "uuid": "dec826b2-3a57-41b0-beef-912fc7aba3ae", 00:12:50.590 "is_configured": true, 00:12:50.590 "data_offset": 2048, 00:12:50.590 "data_size": 63488 00:12:50.590 }, 00:12:50.590 { 00:12:50.590 "name": "BaseBdev3", 00:12:50.590 "uuid": "15a52e82-d77e-4654-9c75-243d1e02a4b6", 00:12:50.590 "is_configured": true, 00:12:50.590 "data_offset": 2048, 00:12:50.590 "data_size": 63488 00:12:50.590 } 00:12:50.590 ] 00:12:50.590 }' 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.590 21:45:13 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:50.848 [2024-11-27 21:45:13.900417] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.848 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:50.848 "name": "Existed_Raid", 00:12:50.848 "aliases": [ 00:12:50.848 "9ad8472f-16d2-4628-bc56-eda8e2254b5e" 00:12:50.848 ], 00:12:50.848 "product_name": "Raid Volume", 00:12:50.849 "block_size": 512, 00:12:50.849 "num_blocks": 126976, 00:12:50.849 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:50.849 "assigned_rate_limits": { 00:12:50.849 "rw_ios_per_sec": 0, 00:12:50.849 "rw_mbytes_per_sec": 0, 00:12:50.849 "r_mbytes_per_sec": 
0, 00:12:50.849 "w_mbytes_per_sec": 0 00:12:50.849 }, 00:12:50.849 "claimed": false, 00:12:50.849 "zoned": false, 00:12:50.849 "supported_io_types": { 00:12:50.849 "read": true, 00:12:50.849 "write": true, 00:12:50.849 "unmap": false, 00:12:50.849 "flush": false, 00:12:50.849 "reset": true, 00:12:50.849 "nvme_admin": false, 00:12:50.849 "nvme_io": false, 00:12:50.849 "nvme_io_md": false, 00:12:50.849 "write_zeroes": true, 00:12:50.849 "zcopy": false, 00:12:50.849 "get_zone_info": false, 00:12:50.849 "zone_management": false, 00:12:50.849 "zone_append": false, 00:12:50.849 "compare": false, 00:12:50.849 "compare_and_write": false, 00:12:50.849 "abort": false, 00:12:50.849 "seek_hole": false, 00:12:50.849 "seek_data": false, 00:12:50.849 "copy": false, 00:12:50.849 "nvme_iov_md": false 00:12:50.849 }, 00:12:50.849 "driver_specific": { 00:12:50.849 "raid": { 00:12:50.849 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:50.849 "strip_size_kb": 64, 00:12:50.849 "state": "online", 00:12:50.849 "raid_level": "raid5f", 00:12:50.849 "superblock": true, 00:12:50.849 "num_base_bdevs": 3, 00:12:50.849 "num_base_bdevs_discovered": 3, 00:12:50.849 "num_base_bdevs_operational": 3, 00:12:50.849 "base_bdevs_list": [ 00:12:50.849 { 00:12:50.849 "name": "BaseBdev1", 00:12:50.849 "uuid": "53e23e46-3bb4-467b-823f-f40ada56fed8", 00:12:50.849 "is_configured": true, 00:12:50.849 "data_offset": 2048, 00:12:50.849 "data_size": 63488 00:12:50.849 }, 00:12:50.849 { 00:12:50.849 "name": "BaseBdev2", 00:12:50.849 "uuid": "dec826b2-3a57-41b0-beef-912fc7aba3ae", 00:12:50.849 "is_configured": true, 00:12:50.849 "data_offset": 2048, 00:12:50.849 "data_size": 63488 00:12:50.849 }, 00:12:50.849 { 00:12:50.849 "name": "BaseBdev3", 00:12:50.849 "uuid": "15a52e82-d77e-4654-9c75-243d1e02a4b6", 00:12:50.849 "is_configured": true, 00:12:50.849 "data_offset": 2048, 00:12:50.849 "data_size": 63488 00:12:50.849 } 00:12:50.849 ] 00:12:50.849 } 00:12:50.849 } 00:12:50.849 }' 00:12:50.849 21:45:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:51.107 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:51.107 BaseBdev2 00:12:51.107 BaseBdev3' 00:12:51.107 21:45:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.107 21:45:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 [2024-11-27 21:45:14.151923] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.107 "name": "Existed_Raid", 00:12:51.107 "uuid": "9ad8472f-16d2-4628-bc56-eda8e2254b5e", 00:12:51.107 "strip_size_kb": 64, 00:12:51.107 "state": "online", 00:12:51.107 "raid_level": "raid5f", 00:12:51.107 "superblock": true, 00:12:51.107 "num_base_bdevs": 3, 00:12:51.107 "num_base_bdevs_discovered": 2, 00:12:51.107 "num_base_bdevs_operational": 2, 00:12:51.107 "base_bdevs_list": [ 00:12:51.107 { 00:12:51.107 "name": null, 00:12:51.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.107 "is_configured": false, 00:12:51.107 "data_offset": 0, 00:12:51.107 "data_size": 63488 00:12:51.107 }, 00:12:51.107 { 00:12:51.107 "name": "BaseBdev2", 00:12:51.107 "uuid": "dec826b2-3a57-41b0-beef-912fc7aba3ae", 00:12:51.107 "is_configured": true, 00:12:51.107 "data_offset": 2048, 00:12:51.107 "data_size": 63488 00:12:51.107 }, 00:12:51.107 { 00:12:51.107 "name": "BaseBdev3", 00:12:51.107 "uuid": "15a52e82-d77e-4654-9c75-243d1e02a4b6", 00:12:51.107 "is_configured": true, 00:12:51.107 "data_offset": 2048, 00:12:51.107 "data_size": 63488 00:12:51.107 } 00:12:51.107 ] 00:12:51.107 }' 00:12:51.107 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.108 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.674 21:45:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.674 [2024-11-27 21:45:14.614376] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:51.674 [2024-11-27 21:45:14.614514] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.674 [2024-11-27 21:45:14.625536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.674 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 [2024-11-27 21:45:14.685426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:51.675 [2024-11-27 21:45:14.685509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 BaseBdev2 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 [ 00:12:51.675 { 00:12:51.675 "name": "BaseBdev2", 00:12:51.675 "aliases": [ 00:12:51.675 "52431ee5-785b-45cc-85d7-e4bdd4badfa2" 00:12:51.675 ], 00:12:51.675 "product_name": "Malloc disk", 00:12:51.933 "block_size": 512, 00:12:51.933 "num_blocks": 65536, 00:12:51.933 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:51.933 "assigned_rate_limits": { 00:12:51.933 "rw_ios_per_sec": 0, 00:12:51.933 "rw_mbytes_per_sec": 0, 00:12:51.933 "r_mbytes_per_sec": 0, 00:12:51.933 "w_mbytes_per_sec": 0 00:12:51.933 }, 00:12:51.933 "claimed": false, 00:12:51.933 "zoned": false, 00:12:51.933 "supported_io_types": { 00:12:51.933 "read": true, 00:12:51.933 "write": true, 00:12:51.933 "unmap": true, 00:12:51.933 "flush": true, 00:12:51.933 "reset": true, 00:12:51.933 "nvme_admin": false, 00:12:51.933 "nvme_io": false, 00:12:51.933 "nvme_io_md": false, 00:12:51.933 "write_zeroes": true, 00:12:51.933 "zcopy": true, 00:12:51.933 "get_zone_info": false, 00:12:51.933 "zone_management": false, 00:12:51.933 "zone_append": false, 
00:12:51.933 "compare": false, 00:12:51.933 "compare_and_write": false, 00:12:51.933 "abort": true, 00:12:51.933 "seek_hole": false, 00:12:51.933 "seek_data": false, 00:12:51.933 "copy": true, 00:12:51.933 "nvme_iov_md": false 00:12:51.933 }, 00:12:51.933 "memory_domains": [ 00:12:51.933 { 00:12:51.933 "dma_device_id": "system", 00:12:51.933 "dma_device_type": 1 00:12:51.933 }, 00:12:51.933 { 00:12:51.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.933 "dma_device_type": 2 00:12:51.933 } 00:12:51.933 ], 00:12:51.933 "driver_specific": {} 00:12:51.933 } 00:12:51.933 ] 00:12:51.933 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 BaseBdev3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.934 
21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 [ 00:12:51.934 { 00:12:51.934 "name": "BaseBdev3", 00:12:51.934 "aliases": [ 00:12:51.934 "f0c38823-e7c3-4306-a5f4-381adb6c6ffc" 00:12:51.934 ], 00:12:51.934 "product_name": "Malloc disk", 00:12:51.934 "block_size": 512, 00:12:51.934 "num_blocks": 65536, 00:12:51.934 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:51.934 "assigned_rate_limits": { 00:12:51.934 "rw_ios_per_sec": 0, 00:12:51.934 "rw_mbytes_per_sec": 0, 00:12:51.934 "r_mbytes_per_sec": 0, 00:12:51.934 "w_mbytes_per_sec": 0 00:12:51.934 }, 00:12:51.934 "claimed": false, 00:12:51.934 "zoned": false, 00:12:51.934 "supported_io_types": { 00:12:51.934 "read": true, 00:12:51.934 "write": true, 00:12:51.934 "unmap": true, 00:12:51.934 "flush": true, 00:12:51.934 "reset": true, 00:12:51.934 "nvme_admin": false, 00:12:51.934 "nvme_io": false, 00:12:51.934 "nvme_io_md": false, 00:12:51.934 "write_zeroes": true, 00:12:51.934 "zcopy": true, 00:12:51.934 "get_zone_info": 
false, 00:12:51.934 "zone_management": false, 00:12:51.934 "zone_append": false, 00:12:51.934 "compare": false, 00:12:51.934 "compare_and_write": false, 00:12:51.934 "abort": true, 00:12:51.934 "seek_hole": false, 00:12:51.934 "seek_data": false, 00:12:51.934 "copy": true, 00:12:51.934 "nvme_iov_md": false 00:12:51.934 }, 00:12:51.934 "memory_domains": [ 00:12:51.934 { 00:12:51.934 "dma_device_id": "system", 00:12:51.934 "dma_device_type": 1 00:12:51.934 }, 00:12:51.934 { 00:12:51.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.934 "dma_device_type": 2 00:12:51.934 } 00:12:51.934 ], 00:12:51.934 "driver_specific": {} 00:12:51.934 } 00:12:51.934 ] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 [2024-11-27 21:45:14.863799] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.934 [2024-11-27 21:45:14.863898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.934 [2024-11-27 21:45:14.863939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:51.934 [2024-11-27 21:45:14.865747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.934 21:45:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.934 "name": "Existed_Raid", 00:12:51.934 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:51.934 "strip_size_kb": 64, 00:12:51.934 "state": "configuring", 00:12:51.934 "raid_level": "raid5f", 00:12:51.934 "superblock": true, 00:12:51.934 "num_base_bdevs": 3, 00:12:51.934 "num_base_bdevs_discovered": 2, 00:12:51.934 "num_base_bdevs_operational": 3, 00:12:51.934 "base_bdevs_list": [ 00:12:51.934 { 00:12:51.934 "name": "BaseBdev1", 00:12:51.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.934 "is_configured": false, 00:12:51.934 "data_offset": 0, 00:12:51.934 "data_size": 0 00:12:51.934 }, 00:12:51.934 { 00:12:51.934 "name": "BaseBdev2", 00:12:51.934 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:51.934 "is_configured": true, 00:12:51.934 "data_offset": 2048, 00:12:51.934 "data_size": 63488 00:12:51.934 }, 00:12:51.934 { 00:12:51.934 "name": "BaseBdev3", 00:12:51.934 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:51.934 "is_configured": true, 00:12:51.934 "data_offset": 2048, 00:12:51.934 "data_size": 63488 00:12:51.934 } 00:12:51.934 ] 00:12:51.934 }' 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.934 21:45:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.192 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:52.192 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.192 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.450 [2024-11-27 21:45:15.311052] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.450 
21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.450 "name": "Existed_Raid", 00:12:52.450 "uuid": 
"451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:52.450 "strip_size_kb": 64, 00:12:52.450 "state": "configuring", 00:12:52.450 "raid_level": "raid5f", 00:12:52.450 "superblock": true, 00:12:52.450 "num_base_bdevs": 3, 00:12:52.450 "num_base_bdevs_discovered": 1, 00:12:52.450 "num_base_bdevs_operational": 3, 00:12:52.450 "base_bdevs_list": [ 00:12:52.450 { 00:12:52.450 "name": "BaseBdev1", 00:12:52.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.450 "is_configured": false, 00:12:52.450 "data_offset": 0, 00:12:52.450 "data_size": 0 00:12:52.450 }, 00:12:52.450 { 00:12:52.450 "name": null, 00:12:52.450 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:52.450 "is_configured": false, 00:12:52.450 "data_offset": 0, 00:12:52.450 "data_size": 63488 00:12:52.450 }, 00:12:52.450 { 00:12:52.450 "name": "BaseBdev3", 00:12:52.450 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:52.450 "is_configured": true, 00:12:52.450 "data_offset": 2048, 00:12:52.450 "data_size": 63488 00:12:52.450 } 00:12:52.450 ] 00:12:52.450 }' 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.450 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:52.709 21:45:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.709 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.709 [2024-11-27 21:45:15.829015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.709 BaseBdev1 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.967 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.968 [ 00:12:52.968 { 00:12:52.968 "name": "BaseBdev1", 00:12:52.968 "aliases": [ 00:12:52.968 "d4374da4-4512-476b-83b6-9d863737d400" 00:12:52.968 ], 00:12:52.968 "product_name": "Malloc disk", 00:12:52.968 "block_size": 512, 00:12:52.968 "num_blocks": 65536, 00:12:52.968 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:52.968 "assigned_rate_limits": { 00:12:52.968 "rw_ios_per_sec": 0, 00:12:52.968 "rw_mbytes_per_sec": 0, 00:12:52.968 "r_mbytes_per_sec": 0, 00:12:52.968 "w_mbytes_per_sec": 0 00:12:52.968 }, 00:12:52.968 "claimed": true, 00:12:52.968 "claim_type": "exclusive_write", 00:12:52.968 "zoned": false, 00:12:52.968 "supported_io_types": { 00:12:52.968 "read": true, 00:12:52.968 "write": true, 00:12:52.968 "unmap": true, 00:12:52.968 "flush": true, 00:12:52.968 "reset": true, 00:12:52.968 "nvme_admin": false, 00:12:52.968 "nvme_io": false, 00:12:52.968 "nvme_io_md": false, 00:12:52.968 "write_zeroes": true, 00:12:52.968 "zcopy": true, 00:12:52.968 "get_zone_info": false, 00:12:52.968 "zone_management": false, 00:12:52.968 "zone_append": false, 00:12:52.968 "compare": false, 00:12:52.968 "compare_and_write": false, 00:12:52.968 "abort": true, 00:12:52.968 "seek_hole": false, 00:12:52.968 "seek_data": false, 00:12:52.968 "copy": true, 00:12:52.968 "nvme_iov_md": false 00:12:52.968 }, 00:12:52.968 "memory_domains": [ 00:12:52.968 { 00:12:52.968 "dma_device_id": "system", 00:12:52.968 "dma_device_type": 1 00:12:52.968 }, 00:12:52.968 { 00:12:52.968 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.968 "dma_device_type": 2 00:12:52.968 } 00:12:52.968 ], 00:12:52.968 "driver_specific": {} 00:12:52.968 } 00:12:52.968 ] 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.968 "name": "Existed_Raid", 00:12:52.968 "uuid": 
"451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:52.968 "strip_size_kb": 64, 00:12:52.968 "state": "configuring", 00:12:52.968 "raid_level": "raid5f", 00:12:52.968 "superblock": true, 00:12:52.968 "num_base_bdevs": 3, 00:12:52.968 "num_base_bdevs_discovered": 2, 00:12:52.968 "num_base_bdevs_operational": 3, 00:12:52.968 "base_bdevs_list": [ 00:12:52.968 { 00:12:52.968 "name": "BaseBdev1", 00:12:52.968 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:52.968 "is_configured": true, 00:12:52.968 "data_offset": 2048, 00:12:52.968 "data_size": 63488 00:12:52.968 }, 00:12:52.968 { 00:12:52.968 "name": null, 00:12:52.968 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:52.968 "is_configured": false, 00:12:52.968 "data_offset": 0, 00:12:52.968 "data_size": 63488 00:12:52.968 }, 00:12:52.968 { 00:12:52.968 "name": "BaseBdev3", 00:12:52.968 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:52.968 "is_configured": true, 00:12:52.968 "data_offset": 2048, 00:12:52.968 "data_size": 63488 00:12:52.968 } 00:12:52.968 ] 00:12:52.968 }' 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.968 21:45:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.227 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.227 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:53.227 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.227 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.227 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:53.487 21:45:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.487 [2024-11-27 21:45:16.352201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.487 "name": "Existed_Raid", 00:12:53.487 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:53.487 "strip_size_kb": 64, 00:12:53.487 "state": "configuring", 00:12:53.487 "raid_level": "raid5f", 00:12:53.487 "superblock": true, 00:12:53.487 "num_base_bdevs": 3, 00:12:53.487 "num_base_bdevs_discovered": 1, 00:12:53.487 "num_base_bdevs_operational": 3, 00:12:53.487 "base_bdevs_list": [ 00:12:53.487 { 00:12:53.487 "name": "BaseBdev1", 00:12:53.487 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:53.487 "is_configured": true, 00:12:53.487 "data_offset": 2048, 00:12:53.487 "data_size": 63488 00:12:53.487 }, 00:12:53.487 { 00:12:53.487 "name": null, 00:12:53.487 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:53.487 "is_configured": false, 00:12:53.487 "data_offset": 0, 00:12:53.487 "data_size": 63488 00:12:53.487 }, 00:12:53.487 { 00:12:53.487 "name": null, 00:12:53.487 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:53.487 "is_configured": false, 00:12:53.487 "data_offset": 0, 00:12:53.487 "data_size": 63488 00:12:53.487 } 00:12:53.487 ] 00:12:53.487 }' 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.487 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # 
jq '.[0].base_bdevs_list[2].is_configured' 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.747 [2024-11-27 21:45:16.827532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.747 21:45:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.747 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.006 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.006 "name": "Existed_Raid", 00:12:54.006 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:54.006 "strip_size_kb": 64, 00:12:54.006 "state": "configuring", 00:12:54.006 "raid_level": "raid5f", 00:12:54.006 "superblock": true, 00:12:54.006 "num_base_bdevs": 3, 00:12:54.006 "num_base_bdevs_discovered": 2, 00:12:54.006 "num_base_bdevs_operational": 3, 00:12:54.006 "base_bdevs_list": [ 00:12:54.006 { 00:12:54.006 "name": "BaseBdev1", 00:12:54.006 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:54.006 "is_configured": true, 00:12:54.006 "data_offset": 2048, 00:12:54.006 "data_size": 63488 00:12:54.006 }, 00:12:54.006 { 00:12:54.006 "name": null, 00:12:54.006 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:54.006 "is_configured": false, 00:12:54.006 "data_offset": 0, 00:12:54.006 "data_size": 63488 00:12:54.006 }, 00:12:54.006 { 00:12:54.006 "name": "BaseBdev3", 00:12:54.006 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:54.006 
"is_configured": true, 00:12:54.006 "data_offset": 2048, 00:12:54.006 "data_size": 63488 00:12:54.006 } 00:12:54.006 ] 00:12:54.006 }' 00:12:54.006 21:45:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.006 21:45:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.267 [2024-11-27 21:45:17.322679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.267 "name": "Existed_Raid", 00:12:54.267 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:54.267 "strip_size_kb": 64, 00:12:54.267 "state": "configuring", 00:12:54.267 "raid_level": "raid5f", 00:12:54.267 "superblock": true, 00:12:54.267 "num_base_bdevs": 3, 00:12:54.267 "num_base_bdevs_discovered": 1, 00:12:54.267 "num_base_bdevs_operational": 3, 00:12:54.267 "base_bdevs_list": [ 00:12:54.267 { 00:12:54.267 "name": null, 00:12:54.267 
"uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:54.267 "is_configured": false, 00:12:54.267 "data_offset": 0, 00:12:54.267 "data_size": 63488 00:12:54.267 }, 00:12:54.267 { 00:12:54.267 "name": null, 00:12:54.267 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:54.267 "is_configured": false, 00:12:54.267 "data_offset": 0, 00:12:54.267 "data_size": 63488 00:12:54.267 }, 00:12:54.267 { 00:12:54.267 "name": "BaseBdev3", 00:12:54.267 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:54.267 "is_configured": true, 00:12:54.267 "data_offset": 2048, 00:12:54.267 "data_size": 63488 00:12:54.267 } 00:12:54.267 ] 00:12:54.267 }' 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.267 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 [2024-11-27 21:45:17.800428] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.836 "name": "Existed_Raid", 00:12:54.836 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:54.836 "strip_size_kb": 64, 00:12:54.836 "state": "configuring", 00:12:54.836 "raid_level": "raid5f", 00:12:54.836 "superblock": true, 00:12:54.836 "num_base_bdevs": 3, 00:12:54.836 "num_base_bdevs_discovered": 2, 00:12:54.836 "num_base_bdevs_operational": 3, 00:12:54.836 "base_bdevs_list": [ 00:12:54.836 { 00:12:54.836 "name": null, 00:12:54.836 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:54.836 "is_configured": false, 00:12:54.836 "data_offset": 0, 00:12:54.836 "data_size": 63488 00:12:54.836 }, 00:12:54.836 { 00:12:54.836 "name": "BaseBdev2", 00:12:54.836 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 }, 00:12:54.836 { 00:12:54.836 "name": "BaseBdev3", 00:12:54.836 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:54.836 "is_configured": true, 00:12:54.836 "data_offset": 2048, 00:12:54.836 "data_size": 63488 00:12:54.836 } 00:12:54.836 ] 00:12:54.836 }' 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.836 21:45:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4374da4-4512-476b-83b6-9d863737d400 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 NewBaseBdev 00:12:55.405 [2024-11-27 21:45:18.374395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:55.405 [2024-11-27 21:45:18.374564] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:12:55.405 [2024-11-27 21:45:18.374580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:55.405 [2024-11-27 21:45:18.374825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:12:55.405 [2024-11-27 21:45:18.375266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:12:55.405 [2024-11-27 21:45:18.375330] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:12:55.405 [2024-11-27 
21:45:18.375457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.405 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.405 [ 00:12:55.405 { 00:12:55.405 "name": "NewBaseBdev", 00:12:55.405 "aliases": [ 00:12:55.405 "d4374da4-4512-476b-83b6-9d863737d400" 00:12:55.405 ], 00:12:55.405 "product_name": "Malloc disk", 00:12:55.405 "block_size": 512, 00:12:55.405 "num_blocks": 
65536, 00:12:55.405 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:55.405 "assigned_rate_limits": { 00:12:55.405 "rw_ios_per_sec": 0, 00:12:55.405 "rw_mbytes_per_sec": 0, 00:12:55.405 "r_mbytes_per_sec": 0, 00:12:55.405 "w_mbytes_per_sec": 0 00:12:55.405 }, 00:12:55.405 "claimed": true, 00:12:55.405 "claim_type": "exclusive_write", 00:12:55.405 "zoned": false, 00:12:55.405 "supported_io_types": { 00:12:55.405 "read": true, 00:12:55.405 "write": true, 00:12:55.405 "unmap": true, 00:12:55.405 "flush": true, 00:12:55.405 "reset": true, 00:12:55.405 "nvme_admin": false, 00:12:55.405 "nvme_io": false, 00:12:55.405 "nvme_io_md": false, 00:12:55.405 "write_zeroes": true, 00:12:55.405 "zcopy": true, 00:12:55.405 "get_zone_info": false, 00:12:55.405 "zone_management": false, 00:12:55.405 "zone_append": false, 00:12:55.405 "compare": false, 00:12:55.405 "compare_and_write": false, 00:12:55.405 "abort": true, 00:12:55.405 "seek_hole": false, 00:12:55.405 "seek_data": false, 00:12:55.405 "copy": true, 00:12:55.405 "nvme_iov_md": false 00:12:55.405 }, 00:12:55.405 "memory_domains": [ 00:12:55.405 { 00:12:55.405 "dma_device_id": "system", 00:12:55.405 "dma_device_type": 1 00:12:55.405 }, 00:12:55.405 { 00:12:55.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.405 "dma_device_type": 2 00:12:55.405 } 00:12:55.405 ], 00:12:55.405 "driver_specific": {} 00:12:55.405 } 00:12:55.405 ] 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.406 "name": "Existed_Raid", 00:12:55.406 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:55.406 "strip_size_kb": 64, 00:12:55.406 "state": "online", 00:12:55.406 "raid_level": "raid5f", 00:12:55.406 "superblock": true, 00:12:55.406 "num_base_bdevs": 3, 00:12:55.406 "num_base_bdevs_discovered": 3, 00:12:55.406 "num_base_bdevs_operational": 3, 00:12:55.406 "base_bdevs_list": [ 00:12:55.406 { 00:12:55.406 "name": "NewBaseBdev", 00:12:55.406 "uuid": 
"d4374da4-4512-476b-83b6-9d863737d400", 00:12:55.406 "is_configured": true, 00:12:55.406 "data_offset": 2048, 00:12:55.406 "data_size": 63488 00:12:55.406 }, 00:12:55.406 { 00:12:55.406 "name": "BaseBdev2", 00:12:55.406 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:55.406 "is_configured": true, 00:12:55.406 "data_offset": 2048, 00:12:55.406 "data_size": 63488 00:12:55.406 }, 00:12:55.406 { 00:12:55.406 "name": "BaseBdev3", 00:12:55.406 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:55.406 "is_configured": true, 00:12:55.406 "data_offset": 2048, 00:12:55.406 "data_size": 63488 00:12:55.406 } 00:12:55.406 ] 00:12:55.406 }' 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.406 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:55.976 [2024-11-27 21:45:18.837900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:55.976 "name": "Existed_Raid", 00:12:55.976 "aliases": [ 00:12:55.976 "451ddbee-aee9-4b30-b8e0-08dcdb92a007" 00:12:55.976 ], 00:12:55.976 "product_name": "Raid Volume", 00:12:55.976 "block_size": 512, 00:12:55.976 "num_blocks": 126976, 00:12:55.976 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:55.976 "assigned_rate_limits": { 00:12:55.976 "rw_ios_per_sec": 0, 00:12:55.976 "rw_mbytes_per_sec": 0, 00:12:55.976 "r_mbytes_per_sec": 0, 00:12:55.976 "w_mbytes_per_sec": 0 00:12:55.976 }, 00:12:55.976 "claimed": false, 00:12:55.976 "zoned": false, 00:12:55.976 "supported_io_types": { 00:12:55.976 "read": true, 00:12:55.976 "write": true, 00:12:55.976 "unmap": false, 00:12:55.976 "flush": false, 00:12:55.976 "reset": true, 00:12:55.976 "nvme_admin": false, 00:12:55.976 "nvme_io": false, 00:12:55.976 "nvme_io_md": false, 00:12:55.976 "write_zeroes": true, 00:12:55.976 "zcopy": false, 00:12:55.976 "get_zone_info": false, 00:12:55.976 "zone_management": false, 00:12:55.976 "zone_append": false, 00:12:55.976 "compare": false, 00:12:55.976 "compare_and_write": false, 00:12:55.976 "abort": false, 00:12:55.976 "seek_hole": false, 00:12:55.976 "seek_data": false, 00:12:55.976 "copy": false, 00:12:55.976 "nvme_iov_md": false 00:12:55.976 }, 00:12:55.976 "driver_specific": { 00:12:55.976 "raid": { 00:12:55.976 "uuid": "451ddbee-aee9-4b30-b8e0-08dcdb92a007", 00:12:55.976 "strip_size_kb": 64, 00:12:55.976 "state": "online", 00:12:55.976 "raid_level": "raid5f", 00:12:55.976 "superblock": true, 00:12:55.976 "num_base_bdevs": 3, 00:12:55.976 "num_base_bdevs_discovered": 3, 00:12:55.976 
"num_base_bdevs_operational": 3, 00:12:55.976 "base_bdevs_list": [ 00:12:55.976 { 00:12:55.976 "name": "NewBaseBdev", 00:12:55.976 "uuid": "d4374da4-4512-476b-83b6-9d863737d400", 00:12:55.976 "is_configured": true, 00:12:55.976 "data_offset": 2048, 00:12:55.976 "data_size": 63488 00:12:55.976 }, 00:12:55.976 { 00:12:55.976 "name": "BaseBdev2", 00:12:55.976 "uuid": "52431ee5-785b-45cc-85d7-e4bdd4badfa2", 00:12:55.976 "is_configured": true, 00:12:55.976 "data_offset": 2048, 00:12:55.976 "data_size": 63488 00:12:55.976 }, 00:12:55.976 { 00:12:55.976 "name": "BaseBdev3", 00:12:55.976 "uuid": "f0c38823-e7c3-4306-a5f4-381adb6c6ffc", 00:12:55.976 "is_configured": true, 00:12:55.976 "data_offset": 2048, 00:12:55.976 "data_size": 63488 00:12:55.976 } 00:12:55.976 ] 00:12:55.976 } 00:12:55.976 } 00:12:55.976 }' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:55.976 BaseBdev2 00:12:55.976 BaseBdev3' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.976 21:45:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.977 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.977 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.977 21:45:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.977 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.246 [2024-11-27 21:45:19.105236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:56.246 [2024-11-27 21:45:19.105301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.246 [2024-11-27 21:45:19.105373] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.246 [2024-11-27 21:45:19.105649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.246 [2024-11-27 21:45:19.105671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 90733 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 90733 ']' 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 90733 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:56.246 
21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90733 00:12:56.246 killing process with pid 90733 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90733' 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 90733 00:12:56.246 [2024-11-27 21:45:19.152627] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:56.246 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 90733 00:12:56.246 [2024-11-27 21:45:19.182227] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:56.519 ************************************ 00:12:56.519 END TEST raid5f_state_function_test_sb 00:12:56.519 ************************************ 00:12:56.519 21:45:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:56.519 00:12:56.519 real 0m8.634s 00:12:56.519 user 0m14.697s 00:12:56.519 sys 0m1.802s 00:12:56.519 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.519 21:45:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.519 21:45:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:12:56.519 21:45:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:56.519 21:45:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.519 21:45:19 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:12:56.519 ************************************ 00:12:56.519 START TEST raid5f_superblock_test 00:12:56.519 ************************************ 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 
-- # strip_size_create_arg='-z 64' 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=91337 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 91337 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 91337 ']' 00:12:56.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.519 21:45:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.519 [2024-11-27 21:45:19.560278] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:12:56.519 [2024-11-27 21:45:19.560496] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91337 ] 00:12:56.779 [2024-11-27 21:45:19.714178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.779 [2024-11-27 21:45:19.738755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.779 [2024-11-27 21:45:19.780800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.779 [2024-11-27 21:45:19.780923] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 malloc1 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 [2024-11-27 21:45:20.408426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:57.349 [2024-11-27 21:45:20.408528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.349 [2024-11-27 21:45:20.408571] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:12:57.349 [2024-11-27 21:45:20.408607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.349 [2024-11-27 21:45:20.410729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.349 [2024-11-27 21:45:20.410811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:57.349 pt1 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 malloc2 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 [2024-11-27 21:45:20.436727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:57.349 [2024-11-27 21:45:20.436840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.349 [2024-11-27 21:45:20.436877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:57.349 [2024-11-27 21:45:20.436917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.349 [2024-11-27 21:45:20.438975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.349 [2024-11-27 21:45:20.439010] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:57.349 pt2 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 malloc3 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.349 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.349 [2024-11-27 21:45:20.465005] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:57.349 [2024-11-27 21:45:20.465093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.349 [2024-11-27 21:45:20.465129] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:57.349 [2024-11-27 21:45:20.465158] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.349 [2024-11-27 21:45:20.467284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.349 [2024-11-27 21:45:20.467355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:57.609 pt3 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.609 [2024-11-27 21:45:20.477051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:57.609 [2024-11-27 21:45:20.478928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:57.609 [2024-11-27 21:45:20.479017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:57.609 [2024-11-27 21:45:20.479240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:12:57.609 [2024-11-27 21:45:20.479295] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:12:57.609 [2024-11-27 21:45:20.479585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:12:57.609 [2024-11-27 21:45:20.480073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:12:57.609 [2024-11-27 21:45:20.480124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:12:57.609 [2024-11-27 21:45:20.480300] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.609 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.609 "name": "raid_bdev1", 00:12:57.609 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:57.609 "strip_size_kb": 64, 00:12:57.610 "state": "online", 00:12:57.610 "raid_level": "raid5f", 00:12:57.610 "superblock": true, 00:12:57.610 "num_base_bdevs": 3, 00:12:57.610 "num_base_bdevs_discovered": 3, 00:12:57.610 "num_base_bdevs_operational": 3, 00:12:57.610 "base_bdevs_list": [ 00:12:57.610 { 00:12:57.610 "name": "pt1", 00:12:57.610 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 }, 00:12:57.610 { 00:12:57.610 "name": "pt2", 00:12:57.610 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 }, 00:12:57.610 { 00:12:57.610 "name": "pt3", 00:12:57.610 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.610 "is_configured": true, 00:12:57.610 "data_offset": 2048, 00:12:57.610 "data_size": 63488 00:12:57.610 } 00:12:57.610 ] 00:12:57.610 }' 00:12:57.610 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.610 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:57.870 21:45:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.870 [2024-11-27 21:45:20.897139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.870 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:57.870 "name": "raid_bdev1", 00:12:57.870 "aliases": [ 00:12:57.870 "09bfe42f-1934-4f81-b81d-65218853f5a0" 00:12:57.870 ], 00:12:57.870 "product_name": "Raid Volume", 00:12:57.870 "block_size": 512, 00:12:57.870 "num_blocks": 126976, 00:12:57.870 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:57.870 "assigned_rate_limits": { 00:12:57.870 "rw_ios_per_sec": 0, 00:12:57.870 "rw_mbytes_per_sec": 0, 00:12:57.870 "r_mbytes_per_sec": 0, 00:12:57.870 "w_mbytes_per_sec": 0 00:12:57.870 }, 00:12:57.870 "claimed": false, 00:12:57.870 "zoned": false, 00:12:57.870 "supported_io_types": { 00:12:57.870 "read": true, 00:12:57.870 "write": true, 00:12:57.870 "unmap": false, 00:12:57.870 "flush": false, 00:12:57.870 "reset": true, 00:12:57.870 "nvme_admin": false, 00:12:57.870 "nvme_io": false, 00:12:57.870 "nvme_io_md": false, 
00:12:57.870 "write_zeroes": true, 00:12:57.870 "zcopy": false, 00:12:57.870 "get_zone_info": false, 00:12:57.870 "zone_management": false, 00:12:57.870 "zone_append": false, 00:12:57.870 "compare": false, 00:12:57.870 "compare_and_write": false, 00:12:57.870 "abort": false, 00:12:57.870 "seek_hole": false, 00:12:57.870 "seek_data": false, 00:12:57.870 "copy": false, 00:12:57.870 "nvme_iov_md": false 00:12:57.870 }, 00:12:57.870 "driver_specific": { 00:12:57.870 "raid": { 00:12:57.870 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:57.870 "strip_size_kb": 64, 00:12:57.870 "state": "online", 00:12:57.870 "raid_level": "raid5f", 00:12:57.870 "superblock": true, 00:12:57.870 "num_base_bdevs": 3, 00:12:57.871 "num_base_bdevs_discovered": 3, 00:12:57.871 "num_base_bdevs_operational": 3, 00:12:57.871 "base_bdevs_list": [ 00:12:57.871 { 00:12:57.871 "name": "pt1", 00:12:57.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:57.871 "is_configured": true, 00:12:57.871 "data_offset": 2048, 00:12:57.871 "data_size": 63488 00:12:57.871 }, 00:12:57.871 { 00:12:57.871 "name": "pt2", 00:12:57.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:57.871 "is_configured": true, 00:12:57.871 "data_offset": 2048, 00:12:57.871 "data_size": 63488 00:12:57.871 }, 00:12:57.871 { 00:12:57.871 "name": "pt3", 00:12:57.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:57.871 "is_configured": true, 00:12:57.871 "data_offset": 2048, 00:12:57.871 "data_size": 63488 00:12:57.871 } 00:12:57.871 ] 00:12:57.871 } 00:12:57.871 } 00:12:57.871 }' 00:12:57.871 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.871 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:57.871 pt2 00:12:57.871 pt3' 00:12:57.871 21:45:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.130 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.131 
21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.131 [2024-11-27 21:45:21.144665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=09bfe42f-1934-4f81-b81d-65218853f5a0 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 09bfe42f-1934-4f81-b81d-65218853f5a0 ']' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.131 21:45:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.131 [2024-11-27 21:45:21.188414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.131 [2024-11-27 21:45:21.188473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.131 [2024-11-27 21:45:21.188564] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.131 [2024-11-27 21:45:21.188671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.131 [2024-11-27 21:45:21.188766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.131 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 [2024-11-27 21:45:21.344220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:58.391 [2024-11-27 21:45:21.346164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:58.391 [2024-11-27 21:45:21.346246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:58.391 [2024-11-27 21:45:21.346313] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:58.391 [2024-11-27 21:45:21.346408] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:58.391 [2024-11-27 21:45:21.346471] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:58.391 [2024-11-27 21:45:21.346533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.391 [2024-11-27 21:45:21.346601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:12:58.391 request: 00:12:58.391 { 00:12:58.391 "name": "raid_bdev1", 00:12:58.391 "raid_level": "raid5f", 00:12:58.391 "base_bdevs": [ 00:12:58.391 "malloc1", 00:12:58.391 "malloc2", 00:12:58.391 "malloc3" 00:12:58.391 ], 00:12:58.391 "strip_size_kb": 64, 00:12:58.391 "superblock": false, 00:12:58.391 "method": "bdev_raid_create", 00:12:58.391 "req_id": 1 00:12:58.391 } 00:12:58.391 Got JSON-RPC error response 00:12:58.391 response: 00:12:58.391 { 00:12:58.391 "code": -17, 00:12:58.391 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:58.391 } 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 [2024-11-27 21:45:21.408144] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:58.391 [2024-11-27 21:45:21.408226] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.391 [2024-11-27 21:45:21.408259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:58.391 [2024-11-27 21:45:21.408306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.391 [2024-11-27 21:45:21.410476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.391 [2024-11-27 21:45:21.410545] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:58.391 [2024-11-27 21:45:21.410636] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:58.391 [2024-11-27 21:45:21.410721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:58.391 pt1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.391 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.391 "name": "raid_bdev1", 00:12:58.391 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:58.391 "strip_size_kb": 64, 00:12:58.391 "state": "configuring", 00:12:58.391 "raid_level": "raid5f", 00:12:58.391 "superblock": true, 00:12:58.391 "num_base_bdevs": 3, 00:12:58.391 "num_base_bdevs_discovered": 1, 00:12:58.392 
"num_base_bdevs_operational": 3, 00:12:58.392 "base_bdevs_list": [ 00:12:58.392 { 00:12:58.392 "name": "pt1", 00:12:58.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.392 "is_configured": true, 00:12:58.392 "data_offset": 2048, 00:12:58.392 "data_size": 63488 00:12:58.392 }, 00:12:58.392 { 00:12:58.392 "name": null, 00:12:58.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.392 "is_configured": false, 00:12:58.392 "data_offset": 2048, 00:12:58.392 "data_size": 63488 00:12:58.392 }, 00:12:58.392 { 00:12:58.392 "name": null, 00:12:58.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.392 "is_configured": false, 00:12:58.392 "data_offset": 2048, 00:12:58.392 "data_size": 63488 00:12:58.392 } 00:12:58.392 ] 00:12:58.392 }' 00:12:58.392 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.392 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.959 [2024-11-27 21:45:21.867378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:58.959 [2024-11-27 21:45:21.867484] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.959 [2024-11-27 21:45:21.867510] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:58.959 [2024-11-27 21:45:21.867524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.959 [2024-11-27 21:45:21.867947] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.959 [2024-11-27 21:45:21.867968] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:58.959 [2024-11-27 21:45:21.868038] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:58.959 [2024-11-27 21:45:21.868069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:58.959 pt2 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.959 [2024-11-27 21:45:21.875363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.959 21:45:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.959 "name": "raid_bdev1", 00:12:58.959 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:58.959 "strip_size_kb": 64, 00:12:58.959 "state": "configuring", 00:12:58.959 "raid_level": "raid5f", 00:12:58.959 "superblock": true, 00:12:58.959 "num_base_bdevs": 3, 00:12:58.959 "num_base_bdevs_discovered": 1, 00:12:58.959 "num_base_bdevs_operational": 3, 00:12:58.959 "base_bdevs_list": [ 00:12:58.959 { 00:12:58.959 "name": "pt1", 00:12:58.960 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:58.960 "is_configured": true, 00:12:58.960 "data_offset": 2048, 00:12:58.960 "data_size": 63488 00:12:58.960 }, 00:12:58.960 { 00:12:58.960 "name": null, 00:12:58.960 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:58.960 "is_configured": false, 00:12:58.960 "data_offset": 0, 00:12:58.960 "data_size": 63488 00:12:58.960 }, 00:12:58.960 { 00:12:58.960 "name": null, 00:12:58.960 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:58.960 "is_configured": false, 00:12:58.960 "data_offset": 2048, 00:12:58.960 "data_size": 63488 00:12:58.960 } 00:12:58.960 ] 00:12:58.960 }' 00:12:58.960 21:45:21 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.960 21:45:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 [2024-11-27 21:45:22.362533] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:59.528 [2024-11-27 21:45:22.362620] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.528 [2024-11-27 21:45:22.362657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:59.528 [2024-11-27 21:45:22.362683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.528 [2024-11-27 21:45:22.363110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.528 [2024-11-27 21:45:22.363164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:59.528 [2024-11-27 21:45:22.363267] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:59.528 [2024-11-27 21:45:22.363315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:59.528 pt2 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.528 21:45:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 [2024-11-27 21:45:22.374512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:59.528 [2024-11-27 21:45:22.374552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:59.528 [2024-11-27 21:45:22.374569] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:59.528 [2024-11-27 21:45:22.374577] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:59.528 [2024-11-27 21:45:22.374932] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:59.528 [2024-11-27 21:45:22.374953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:59.528 [2024-11-27 21:45:22.375018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:59.528 [2024-11-27 21:45:22.375036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:59.528 [2024-11-27 21:45:22.375131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:12:59.528 [2024-11-27 21:45:22.375149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:12:59.528 [2024-11-27 21:45:22.375377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:12:59.528 [2024-11-27 21:45:22.375776] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:12:59.528 [2024-11-27 21:45:22.375821] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:12:59.528 [2024-11-27 21:45:22.375925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.528 pt3 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.528 "name": "raid_bdev1", 00:12:59.528 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:59.528 "strip_size_kb": 64, 00:12:59.528 "state": "online", 00:12:59.528 "raid_level": "raid5f", 00:12:59.528 "superblock": true, 00:12:59.528 "num_base_bdevs": 3, 00:12:59.528 "num_base_bdevs_discovered": 3, 00:12:59.528 "num_base_bdevs_operational": 3, 00:12:59.528 "base_bdevs_list": [ 00:12:59.528 { 00:12:59.528 "name": "pt1", 00:12:59.528 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.528 "is_configured": true, 00:12:59.528 "data_offset": 2048, 00:12:59.528 "data_size": 63488 00:12:59.528 }, 00:12:59.528 { 00:12:59.528 "name": "pt2", 00:12:59.528 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.528 "is_configured": true, 00:12:59.528 "data_offset": 2048, 00:12:59.528 "data_size": 63488 00:12:59.528 }, 00:12:59.528 { 00:12:59.528 "name": "pt3", 00:12:59.528 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.528 "is_configured": true, 00:12:59.528 "data_offset": 2048, 00:12:59.528 "data_size": 63488 00:12:59.528 } 00:12:59.528 ] 00:12:59.528 }' 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.528 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.787 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 [2024-11-27 21:45:22.766018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:59.788 "name": "raid_bdev1", 00:12:59.788 "aliases": [ 00:12:59.788 "09bfe42f-1934-4f81-b81d-65218853f5a0" 00:12:59.788 ], 00:12:59.788 "product_name": "Raid Volume", 00:12:59.788 "block_size": 512, 00:12:59.788 "num_blocks": 126976, 00:12:59.788 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:59.788 "assigned_rate_limits": { 00:12:59.788 "rw_ios_per_sec": 0, 00:12:59.788 "rw_mbytes_per_sec": 0, 00:12:59.788 "r_mbytes_per_sec": 0, 00:12:59.788 "w_mbytes_per_sec": 0 00:12:59.788 }, 00:12:59.788 "claimed": false, 00:12:59.788 "zoned": false, 00:12:59.788 "supported_io_types": { 00:12:59.788 "read": true, 00:12:59.788 "write": true, 00:12:59.788 "unmap": false, 00:12:59.788 "flush": false, 00:12:59.788 "reset": true, 00:12:59.788 "nvme_admin": false, 00:12:59.788 "nvme_io": false, 00:12:59.788 "nvme_io_md": false, 00:12:59.788 "write_zeroes": true, 00:12:59.788 "zcopy": false, 00:12:59.788 
"get_zone_info": false, 00:12:59.788 "zone_management": false, 00:12:59.788 "zone_append": false, 00:12:59.788 "compare": false, 00:12:59.788 "compare_and_write": false, 00:12:59.788 "abort": false, 00:12:59.788 "seek_hole": false, 00:12:59.788 "seek_data": false, 00:12:59.788 "copy": false, 00:12:59.788 "nvme_iov_md": false 00:12:59.788 }, 00:12:59.788 "driver_specific": { 00:12:59.788 "raid": { 00:12:59.788 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:12:59.788 "strip_size_kb": 64, 00:12:59.788 "state": "online", 00:12:59.788 "raid_level": "raid5f", 00:12:59.788 "superblock": true, 00:12:59.788 "num_base_bdevs": 3, 00:12:59.788 "num_base_bdevs_discovered": 3, 00:12:59.788 "num_base_bdevs_operational": 3, 00:12:59.788 "base_bdevs_list": [ 00:12:59.788 { 00:12:59.788 "name": "pt1", 00:12:59.788 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:59.788 "is_configured": true, 00:12:59.788 "data_offset": 2048, 00:12:59.788 "data_size": 63488 00:12:59.788 }, 00:12:59.788 { 00:12:59.788 "name": "pt2", 00:12:59.788 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:59.788 "is_configured": true, 00:12:59.788 "data_offset": 2048, 00:12:59.788 "data_size": 63488 00:12:59.788 }, 00:12:59.788 { 00:12:59.788 "name": "pt3", 00:12:59.788 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:59.788 "is_configured": true, 00:12:59.788 "data_offset": 2048, 00:12:59.788 "data_size": 63488 00:12:59.788 } 00:12:59.788 ] 00:12:59.788 } 00:12:59.788 } 00:12:59.788 }' 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:59.788 pt2 00:12:59.788 pt3' 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.788 21:45:22 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:59.788 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.049 21:45:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.049 [2024-11-27 21:45:23.049489] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 09bfe42f-1934-4f81-b81d-65218853f5a0 '!=' 09bfe42f-1934-4f81-b81d-65218853f5a0 ']' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.049 [2024-11-27 21:45:23.077328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.049 "name": "raid_bdev1", 00:13:00.049 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:13:00.049 "strip_size_kb": 64, 00:13:00.049 "state": "online", 00:13:00.049 "raid_level": "raid5f", 00:13:00.049 "superblock": true, 00:13:00.049 "num_base_bdevs": 3, 00:13:00.049 "num_base_bdevs_discovered": 2, 00:13:00.049 "num_base_bdevs_operational": 2, 00:13:00.049 "base_bdevs_list": [ 00:13:00.049 { 00:13:00.049 "name": null, 00:13:00.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.049 "is_configured": false, 00:13:00.049 "data_offset": 0, 00:13:00.049 "data_size": 63488 00:13:00.049 }, 00:13:00.049 { 00:13:00.049 "name": "pt2", 00:13:00.049 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.049 "is_configured": true, 00:13:00.049 "data_offset": 2048, 00:13:00.049 "data_size": 63488 00:13:00.049 }, 00:13:00.049 { 00:13:00.049 "name": "pt3", 00:13:00.049 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.049 "is_configured": true, 00:13:00.049 "data_offset": 2048, 00:13:00.049 "data_size": 63488 00:13:00.049 } 00:13:00.049 ] 00:13:00.049 }' 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.049 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 [2024-11-27 21:45:23.504577] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:00.618 [2024-11-27 21:45:23.504652] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:00.618 [2024-11-27 21:45:23.504744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.618 [2024-11-27 21:45:23.504841] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.618 [2024-11-27 21:45:23.504902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.618 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.618 [2024-11-27 21:45:23.572448] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:00.618 [2024-11-27 21:45:23.572492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.618 [2024-11-27 21:45:23.572511] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:13:00.618 [2024-11-27 21:45:23.572520] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:13:00.618 [2024-11-27 21:45:23.574619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.618 [2024-11-27 21:45:23.574687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:00.618 [2024-11-27 21:45:23.574779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:00.618 [2024-11-27 21:45:23.574829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:00.619 pt2 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.619 "name": "raid_bdev1", 00:13:00.619 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:13:00.619 "strip_size_kb": 64, 00:13:00.619 "state": "configuring", 00:13:00.619 "raid_level": "raid5f", 00:13:00.619 "superblock": true, 00:13:00.619 "num_base_bdevs": 3, 00:13:00.619 "num_base_bdevs_discovered": 1, 00:13:00.619 "num_base_bdevs_operational": 2, 00:13:00.619 "base_bdevs_list": [ 00:13:00.619 { 00:13:00.619 "name": null, 00:13:00.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.619 "is_configured": false, 00:13:00.619 "data_offset": 2048, 00:13:00.619 "data_size": 63488 00:13:00.619 }, 00:13:00.619 { 00:13:00.619 "name": "pt2", 00:13:00.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:00.619 "is_configured": true, 00:13:00.619 "data_offset": 2048, 00:13:00.619 "data_size": 63488 00:13:00.619 }, 00:13:00.619 { 00:13:00.619 "name": null, 00:13:00.619 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:00.619 "is_configured": false, 00:13:00.619 "data_offset": 2048, 00:13:00.619 "data_size": 63488 00:13:00.619 } 00:13:00.619 ] 00:13:00.619 }' 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.619 21:45:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 [2024-11-27 21:45:24.051682] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.188 [2024-11-27 21:45:24.051776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.188 [2024-11-27 21:45:24.051835] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:01.188 [2024-11-27 21:45:24.051868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.188 [2024-11-27 21:45:24.052311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.188 [2024-11-27 21:45:24.052366] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.188 [2024-11-27 21:45:24.052477] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.188 [2024-11-27 21:45:24.052528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.188 [2024-11-27 21:45:24.052659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:01.188 [2024-11-27 21:45:24.052697] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:01.188 [2024-11-27 21:45:24.052987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:01.188 [2024-11-27 21:45:24.053519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:01.188 [2024-11-27 21:45:24.053572] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000001c80 00:13:01.188 [2024-11-27 21:45:24.053866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.188 pt3 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.188 21:45:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.188 "name": "raid_bdev1", 00:13:01.188 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:13:01.188 "strip_size_kb": 64, 00:13:01.188 "state": "online", 00:13:01.188 "raid_level": "raid5f", 00:13:01.188 "superblock": true, 00:13:01.188 "num_base_bdevs": 3, 00:13:01.188 "num_base_bdevs_discovered": 2, 00:13:01.188 "num_base_bdevs_operational": 2, 00:13:01.188 "base_bdevs_list": [ 00:13:01.188 { 00:13:01.188 "name": null, 00:13:01.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.188 "is_configured": false, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "name": "pt2", 00:13:01.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 }, 00:13:01.188 { 00:13:01.188 "name": "pt3", 00:13:01.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.188 "is_configured": true, 00:13:01.188 "data_offset": 2048, 00:13:01.188 "data_size": 63488 00:13:01.188 } 00:13:01.188 ] 00:13:01.188 }' 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.188 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 [2024-11-27 21:45:24.498917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.448 [2024-11-27 21:45:24.498985] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.448 [2024-11-27 21:45:24.499053] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.448 [2024-11-27 21:45:24.499116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.448 [2024-11-27 21:45:24.499129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.448 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.448 [2024-11-27 21:45:24.554901] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:01.448 [2024-11-27 21:45:24.554992] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.448 [2024-11-27 21:45:24.555026] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.448 [2024-11-27 21:45:24.555085] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.448 [2024-11-27 21:45:24.557277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.449 [2024-11-27 21:45:24.557344] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:01.449 [2024-11-27 21:45:24.557426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:01.449 [2024-11-27 21:45:24.557493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:01.449 [2024-11-27 21:45:24.557647] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:01.449 [2024-11-27 21:45:24.557714] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:01.449 [2024-11-27 21:45:24.557755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:13:01.449 [2024-11-27 21:45:24.557868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:01.449 pt1 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:13:01.449 21:45:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.449 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.708 "name": "raid_bdev1", 00:13:01.708 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:13:01.708 "strip_size_kb": 64, 00:13:01.708 "state": "configuring", 00:13:01.708 "raid_level": "raid5f", 00:13:01.708 
"superblock": true, 00:13:01.708 "num_base_bdevs": 3, 00:13:01.708 "num_base_bdevs_discovered": 1, 00:13:01.708 "num_base_bdevs_operational": 2, 00:13:01.708 "base_bdevs_list": [ 00:13:01.708 { 00:13:01.708 "name": null, 00:13:01.708 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.708 "is_configured": false, 00:13:01.708 "data_offset": 2048, 00:13:01.708 "data_size": 63488 00:13:01.708 }, 00:13:01.708 { 00:13:01.708 "name": "pt2", 00:13:01.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.708 "is_configured": true, 00:13:01.708 "data_offset": 2048, 00:13:01.708 "data_size": 63488 00:13:01.708 }, 00:13:01.708 { 00:13:01.708 "name": null, 00:13:01.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.708 "is_configured": false, 00:13:01.708 "data_offset": 2048, 00:13:01.708 "data_size": 63488 00:13:01.708 } 00:13:01.708 ] 00:13:01.708 }' 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.708 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.967 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:01.968 21:45:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:01.968 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.968 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 21:45:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 [2024-11-27 21:45:25.018083] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:01.968 [2024-11-27 21:45:25.018179] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.968 [2024-11-27 21:45:25.018214] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:01.968 [2024-11-27 21:45:25.018246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.968 [2024-11-27 21:45:25.018651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.968 [2024-11-27 21:45:25.018711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:01.968 [2024-11-27 21:45:25.018819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:01.968 [2024-11-27 21:45:25.018875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:01.968 [2024-11-27 21:45:25.019000] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:13:01.968 [2024-11-27 21:45:25.019056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:01.968 [2024-11-27 21:45:25.019322] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:01.968 [2024-11-27 21:45:25.019831] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:13:01.968 [2024-11-27 21:45:25.019848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:13:01.968 [2024-11-27 21:45:25.020005] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.968 pt3 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.968 "name": "raid_bdev1", 00:13:01.968 "uuid": "09bfe42f-1934-4f81-b81d-65218853f5a0", 00:13:01.968 "strip_size_kb": 64, 00:13:01.968 "state": "online", 00:13:01.968 "raid_level": 
"raid5f", 00:13:01.968 "superblock": true, 00:13:01.968 "num_base_bdevs": 3, 00:13:01.968 "num_base_bdevs_discovered": 2, 00:13:01.968 "num_base_bdevs_operational": 2, 00:13:01.968 "base_bdevs_list": [ 00:13:01.968 { 00:13:01.968 "name": null, 00:13:01.968 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.968 "is_configured": false, 00:13:01.968 "data_offset": 2048, 00:13:01.968 "data_size": 63488 00:13:01.968 }, 00:13:01.968 { 00:13:01.968 "name": "pt2", 00:13:01.968 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:01.968 "is_configured": true, 00:13:01.968 "data_offset": 2048, 00:13:01.968 "data_size": 63488 00:13:01.968 }, 00:13:01.968 { 00:13:01.968 "name": "pt3", 00:13:01.968 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:01.968 "is_configured": true, 00:13:01.968 "data_offset": 2048, 00:13:01.968 "data_size": 63488 00:13:01.968 } 00:13:01.968 ] 00:13:01.968 }' 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.968 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:02.538 [2024-11-27 21:45:25.461571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 09bfe42f-1934-4f81-b81d-65218853f5a0 '!=' 09bfe42f-1934-4f81-b81d-65218853f5a0 ']' 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 91337 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 91337 ']' 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 91337 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91337 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91337' 00:13:02.538 killing process with pid 91337 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 91337 00:13:02.538 [2024-11-27 21:45:25.546187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:02.538 [2024-11-27 21:45:25.546320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:13:02.538 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 91337 00:13:02.538 [2024-11-27 21:45:25.546419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:02.538 [2024-11-27 21:45:25.546431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:13:02.538 [2024-11-27 21:45:25.579085] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.799 21:45:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:02.799 00:13:02.799 real 0m6.322s 00:13:02.799 user 0m10.617s 00:13:02.799 sys 0m1.299s 00:13:02.799 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.799 ************************************ 00:13:02.799 END TEST raid5f_superblock_test 00:13:02.799 ************************************ 00:13:02.799 21:45:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.799 21:45:25 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:13:02.799 21:45:25 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:13:02.799 21:45:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:02.799 21:45:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.799 21:45:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.799 ************************************ 00:13:02.799 START TEST raid5f_rebuild_test 00:13:02.799 ************************************ 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=91764 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 91764 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 91764 ']' 00:13:02.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.799 21:45:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.060 [2024-11-27 21:45:25.977164] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:03.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.060 Zero copy mechanism will not be used. 00:13:03.060 [2024-11-27 21:45:25.977387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91764 ] 00:13:03.060 [2024-11-27 21:45:26.132924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.060 [2024-11-27 21:45:26.158966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.320 [2024-11-27 21:45:26.202535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.320 [2024-11-27 21:45:26.202563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 BaseBdev1_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 
21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 [2024-11-27 21:45:26.810478] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.891 [2024-11-27 21:45:26.810535] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.891 [2024-11-27 21:45:26.810563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:03.891 [2024-11-27 21:45:26.810574] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.891 [2024-11-27 21:45:26.812725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.891 [2024-11-27 21:45:26.812760] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.891 BaseBdev1 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 BaseBdev2_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 [2024-11-27 21:45:26.838677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:03.891 [2024-11-27 21:45:26.838775] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.891 [2024-11-27 21:45:26.838825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:03.891 [2024-11-27 21:45:26.838853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.891 [2024-11-27 21:45:26.840934] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.891 [2024-11-27 21:45:26.841003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.891 BaseBdev2 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 BaseBdev3_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 [2024-11-27 21:45:26.867182] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:03.891 [2024-11-27 21:45:26.867228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.891 [2024-11-27 21:45:26.867265] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:03.891 [2024-11-27 21:45:26.867274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.891 [2024-11-27 21:45:26.869324] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.891 [2024-11-27 21:45:26.869357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.891 BaseBdev3 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.891 spare_malloc 00:13:03.891 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.892 spare_delay 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.892 [2024-11-27 21:45:26.924785] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.892 [2024-11-27 21:45:26.924854] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.892 [2024-11-27 21:45:26.924885] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:03.892 [2024-11-27 21:45:26.924896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.892 [2024-11-27 21:45:26.927462] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.892 [2024-11-27 21:45:26.927503] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.892 spare 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.892 [2024-11-27 21:45:26.936825] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.892 [2024-11-27 21:45:26.938642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.892 [2024-11-27 21:45:26.938700] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.892 [2024-11-27 21:45:26.938774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:03.892 [2024-11-27 21:45:26.938784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:13:03.892 [2024-11-27 
21:45:26.939045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:03.892 [2024-11-27 21:45:26.939441] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:03.892 [2024-11-27 21:45:26.939467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:03.892 [2024-11-27 21:45:26.939578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.892 "name": "raid_bdev1", 00:13:03.892 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:03.892 "strip_size_kb": 64, 00:13:03.892 "state": "online", 00:13:03.892 "raid_level": "raid5f", 00:13:03.892 "superblock": false, 00:13:03.892 "num_base_bdevs": 3, 00:13:03.892 "num_base_bdevs_discovered": 3, 00:13:03.892 "num_base_bdevs_operational": 3, 00:13:03.892 "base_bdevs_list": [ 00:13:03.892 { 00:13:03.892 "name": "BaseBdev1", 00:13:03.892 "uuid": "8630a830-e4d9-58b2-bdc6-35150d3badff", 00:13:03.892 "is_configured": true, 00:13:03.892 "data_offset": 0, 00:13:03.892 "data_size": 65536 00:13:03.892 }, 00:13:03.892 { 00:13:03.892 "name": "BaseBdev2", 00:13:03.892 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:03.892 "is_configured": true, 00:13:03.892 "data_offset": 0, 00:13:03.892 "data_size": 65536 00:13:03.892 }, 00:13:03.892 { 00:13:03.892 "name": "BaseBdev3", 00:13:03.892 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:03.892 "is_configured": true, 00:13:03.892 "data_offset": 0, 00:13:03.892 "data_size": 65536 00:13:03.892 } 00:13:03.892 ] 00:13:03.892 }' 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.892 21:45:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.462 21:45:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.462 [2024-11-27 21:45:27.328558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:04.462 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:04.463 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:13:04.463 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:04.463 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:04.463 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.463 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:04.722 [2024-11-27 21:45:27.596071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:04.723 /dev/nbd0 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.723 1+0 records in 00:13:04.723 1+0 records out 00:13:04.723 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338988 s, 12.1 MB/s 00:13:04.723 
21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:13:04.723 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:13:04.982 512+0 records in 00:13:04.982 512+0 records out 00:13:04.982 67108864 bytes (67 MB, 64 MiB) copied, 0.280852 s, 239 MB/s 00:13:04.982 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:04.982 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:04.982 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:04.983 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:04.983 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:04.983 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
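The numbers in the phase above (raid_bdev_size=131072, write_unit_size=256, and the 67108864-byte `dd` transfer) all follow from the test's raid5f geometry. A minimal sketch reproducing that arithmetic, assuming 512-byte blocks and the three 65536-block base bdevs reported earlier in the log:

```shell
# Sketch (assumptions labeled): geometry exercised by this raid5f test.
# Values taken from the log: strip_size_kb=64, num_base_bdevs=3,
# data_size=65536 blocks per base bdev; 512-byte blocks is an assumption.
block_size=512
strip_size_kb=64
num_base_bdevs=3
base_bdev_blocks=65536

# raid5f capacity: one strip per stripe holds parity, so usable space
# is (N-1)/N of the raw space across N base bdevs.
raid_blocks=$(( base_bdev_blocks * (num_base_bdevs - 1) ))       # 131072
# A full-stripe write spans the (N-1) data strips of one stripe.
strip_size_blocks=$(( strip_size_kb * 1024 / block_size ))       # 128
write_unit_size=$(( strip_size_blocks * (num_base_bdevs - 1) ))  # 256
# Total bytes written by the full-device dd in the log.
raid_bytes=$(( raid_blocks * block_size ))                       # 67108864
echo "$raid_blocks $write_unit_size $raid_bytes"
```

These match the log: `raid_bdev_size=131072`, `write_unit_size=256`, and `dd bs=131072 count=512` moving 67108864 bytes (64 MiB).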
00:13:04.983 21:45:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:05.242 [2024-11-27 21:45:28.131276] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.242 [2024-11-27 21:45:28.163288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.242 21:45:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:05.242 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.243 "name": "raid_bdev1", 00:13:05.243 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:05.243 "strip_size_kb": 64, 00:13:05.243 "state": "online", 00:13:05.243 "raid_level": "raid5f", 00:13:05.243 "superblock": false, 00:13:05.243 "num_base_bdevs": 3, 00:13:05.243 "num_base_bdevs_discovered": 2, 00:13:05.243 "num_base_bdevs_operational": 2, 00:13:05.243 "base_bdevs_list": [ 00:13:05.243 { 00:13:05.243 "name": null, 00:13:05.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.243 "is_configured": false, 00:13:05.243 "data_offset": 0, 00:13:05.243 "data_size": 65536 00:13:05.243 }, 00:13:05.243 { 00:13:05.243 
"name": "BaseBdev2", 00:13:05.243 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:05.243 "is_configured": true, 00:13:05.243 "data_offset": 0, 00:13:05.243 "data_size": 65536 00:13:05.243 }, 00:13:05.243 { 00:13:05.243 "name": "BaseBdev3", 00:13:05.243 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:05.243 "is_configured": true, 00:13:05.243 "data_offset": 0, 00:13:05.243 "data_size": 65536 00:13:05.243 } 00:13:05.243 ] 00:13:05.243 }' 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.243 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.812 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:05.812 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.812 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.812 [2024-11-27 21:45:28.646499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:05.812 [2024-11-27 21:45:28.651119] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027cd0 00:13:05.812 21:45:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.812 21:45:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:05.812 [2024-11-27 21:45:28.653353] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.776 "name": "raid_bdev1", 00:13:06.776 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:06.776 "strip_size_kb": 64, 00:13:06.776 "state": "online", 00:13:06.776 "raid_level": "raid5f", 00:13:06.776 "superblock": false, 00:13:06.776 "num_base_bdevs": 3, 00:13:06.776 "num_base_bdevs_discovered": 3, 00:13:06.776 "num_base_bdevs_operational": 3, 00:13:06.776 "process": { 00:13:06.776 "type": "rebuild", 00:13:06.776 "target": "spare", 00:13:06.776 "progress": { 00:13:06.776 "blocks": 20480, 00:13:06.776 "percent": 15 00:13:06.776 } 00:13:06.776 }, 00:13:06.776 "base_bdevs_list": [ 00:13:06.776 { 00:13:06.776 "name": "spare", 00:13:06.776 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:06.776 "is_configured": true, 00:13:06.776 "data_offset": 0, 00:13:06.776 "data_size": 65536 00:13:06.776 }, 00:13:06.776 { 00:13:06.776 "name": "BaseBdev2", 00:13:06.776 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:06.776 "is_configured": true, 00:13:06.776 "data_offset": 0, 00:13:06.776 "data_size": 65536 00:13:06.776 }, 00:13:06.776 { 00:13:06.776 "name": "BaseBdev3", 00:13:06.776 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:06.776 "is_configured": true, 00:13:06.776 "data_offset": 0, 00:13:06.776 
"data_size": 65536 00:13:06.776 } 00:13:06.776 ] 00:13:06.776 }' 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.776 [2024-11-27 21:45:29.809545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.776 [2024-11-27 21:45:29.860543] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:06.776 [2024-11-27 21:45:29.860611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.776 [2024-11-27 21:45:29.860628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:06.776 [2024-11-27 21:45:29.860637] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.776 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.036 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.036 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:07.036 "name": "raid_bdev1", 00:13:07.036 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:07.036 "strip_size_kb": 64, 00:13:07.036 "state": "online", 00:13:07.036 "raid_level": "raid5f", 00:13:07.036 "superblock": false, 00:13:07.036 "num_base_bdevs": 3, 00:13:07.036 "num_base_bdevs_discovered": 2, 00:13:07.036 "num_base_bdevs_operational": 2, 00:13:07.036 "base_bdevs_list": [ 00:13:07.036 { 00:13:07.036 "name": null, 00:13:07.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.036 "is_configured": false, 00:13:07.036 "data_offset": 0, 00:13:07.036 "data_size": 65536 00:13:07.036 }, 00:13:07.036 { 00:13:07.036 "name": "BaseBdev2", 00:13:07.036 
"uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:07.036 "is_configured": true, 00:13:07.036 "data_offset": 0, 00:13:07.036 "data_size": 65536 00:13:07.036 }, 00:13:07.036 { 00:13:07.036 "name": "BaseBdev3", 00:13:07.036 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:07.036 "is_configured": true, 00:13:07.036 "data_offset": 0, 00:13:07.036 "data_size": 65536 00:13:07.036 } 00:13:07.036 ] 00:13:07.036 }' 00:13:07.036 21:45:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:07.036 21:45:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.296 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.297 "name": "raid_bdev1", 00:13:07.297 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:07.297 "strip_size_kb": 64, 00:13:07.297 "state": "online", 00:13:07.297 "raid_level": 
"raid5f", 00:13:07.297 "superblock": false, 00:13:07.297 "num_base_bdevs": 3, 00:13:07.297 "num_base_bdevs_discovered": 2, 00:13:07.297 "num_base_bdevs_operational": 2, 00:13:07.297 "base_bdevs_list": [ 00:13:07.297 { 00:13:07.297 "name": null, 00:13:07.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.297 "is_configured": false, 00:13:07.297 "data_offset": 0, 00:13:07.297 "data_size": 65536 00:13:07.297 }, 00:13:07.297 { 00:13:07.297 "name": "BaseBdev2", 00:13:07.297 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:07.297 "is_configured": true, 00:13:07.297 "data_offset": 0, 00:13:07.297 "data_size": 65536 00:13:07.297 }, 00:13:07.297 { 00:13:07.297 "name": "BaseBdev3", 00:13:07.297 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:07.297 "is_configured": true, 00:13:07.297 "data_offset": 0, 00:13:07.297 "data_size": 65536 00:13:07.297 } 00:13:07.297 ] 00:13:07.297 }' 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:07.297 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.557 [2024-11-27 21:45:30.445427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:07.557 [2024-11-27 21:45:30.449718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.557 21:45:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:07.557 [2024-11-27 21:45:30.451768] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.498 "name": "raid_bdev1", 00:13:08.498 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:08.498 "strip_size_kb": 64, 00:13:08.498 "state": "online", 00:13:08.498 "raid_level": "raid5f", 00:13:08.498 "superblock": false, 00:13:08.498 "num_base_bdevs": 3, 00:13:08.498 "num_base_bdevs_discovered": 3, 00:13:08.498 "num_base_bdevs_operational": 3, 00:13:08.498 "process": { 00:13:08.498 "type": "rebuild", 00:13:08.498 "target": "spare", 00:13:08.498 "progress": { 00:13:08.498 "blocks": 20480, 00:13:08.498 
"percent": 15 00:13:08.498 } 00:13:08.498 }, 00:13:08.498 "base_bdevs_list": [ 00:13:08.498 { 00:13:08.498 "name": "spare", 00:13:08.498 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 "data_size": 65536 00:13:08.498 }, 00:13:08.498 { 00:13:08.498 "name": "BaseBdev2", 00:13:08.498 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 "data_size": 65536 00:13:08.498 }, 00:13:08.498 { 00:13:08.498 "name": "BaseBdev3", 00:13:08.498 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 "data_size": 65536 00:13:08.498 } 00:13:08.498 ] 00:13:08.498 }' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.498 "name": "raid_bdev1", 00:13:08.498 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:08.498 "strip_size_kb": 64, 00:13:08.498 "state": "online", 00:13:08.498 "raid_level": "raid5f", 00:13:08.498 "superblock": false, 00:13:08.498 "num_base_bdevs": 3, 00:13:08.498 "num_base_bdevs_discovered": 3, 00:13:08.498 "num_base_bdevs_operational": 3, 00:13:08.498 "process": { 00:13:08.498 "type": "rebuild", 00:13:08.498 "target": "spare", 00:13:08.498 "progress": { 00:13:08.498 "blocks": 22528, 00:13:08.498 "percent": 17 00:13:08.498 } 00:13:08.498 }, 00:13:08.498 "base_bdevs_list": [ 00:13:08.498 { 00:13:08.498 "name": "spare", 00:13:08.498 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 "data_size": 65536 00:13:08.498 }, 00:13:08.498 { 00:13:08.498 "name": "BaseBdev2", 00:13:08.498 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 
"data_size": 65536 00:13:08.498 }, 00:13:08.498 { 00:13:08.498 "name": "BaseBdev3", 00:13:08.498 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:08.498 "is_configured": true, 00:13:08.498 "data_offset": 0, 00:13:08.498 "data_size": 65536 00:13:08.498 } 00:13:08.498 ] 00:13:08.498 }' 00:13:08.498 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.758 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.758 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.758 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.758 21:45:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.697 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.697 "name": "raid_bdev1", 00:13:09.697 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:09.697 "strip_size_kb": 64, 00:13:09.697 "state": "online", 00:13:09.697 "raid_level": "raid5f", 00:13:09.697 "superblock": false, 00:13:09.697 "num_base_bdevs": 3, 00:13:09.697 "num_base_bdevs_discovered": 3, 00:13:09.697 "num_base_bdevs_operational": 3, 00:13:09.697 "process": { 00:13:09.698 "type": "rebuild", 00:13:09.698 "target": "spare", 00:13:09.698 "progress": { 00:13:09.698 "blocks": 45056, 00:13:09.698 "percent": 34 00:13:09.698 } 00:13:09.698 }, 00:13:09.698 "base_bdevs_list": [ 00:13:09.698 { 00:13:09.698 "name": "spare", 00:13:09.698 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:09.698 "is_configured": true, 00:13:09.698 "data_offset": 0, 00:13:09.698 "data_size": 65536 00:13:09.698 }, 00:13:09.698 { 00:13:09.698 "name": "BaseBdev2", 00:13:09.698 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:09.698 "is_configured": true, 00:13:09.698 "data_offset": 0, 00:13:09.698 "data_size": 65536 00:13:09.698 }, 00:13:09.698 { 00:13:09.698 "name": "BaseBdev3", 00:13:09.698 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:09.698 "is_configured": true, 00:13:09.698 "data_offset": 0, 00:13:09.698 "data_size": 65536 00:13:09.698 } 00:13:09.698 ] 00:13:09.698 }' 00:13:09.698 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.698 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.698 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.698 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.698 21:45:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
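The polling loop above (`verify_raid_bdev_process` followed by `sleep 1`) checks rebuild progress by extracting `.process.type` and `.process.target` from the `bdev_raid_get_bdevs` JSON with `jq`. A minimal sketch of those filters, run against a trimmed copy of the JSON seen in the log (assumes `jq` is installed; the inline JSON here is illustrative, not live RPC output):

```shell
# Sketch: the jq filters used by verify_raid_bdev_process, applied to a
# reduced sample of the raid_bdev_info JSON from this log.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid5f",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 45056, "percent": 34 }
  }
}'
# '// "none"' supplies a default when no rebuild process is running,
# which is how the later "none none" verification passes.
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')
echo "$process_type $process_target"
```

While the rebuild runs the filters yield `rebuild spare`; once it finishes and the `process` object disappears, the same filters fall back to `none`, matching the `[[ none == \n\o\n\e ]]` checks earlier in the log.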
00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.079 "name": "raid_bdev1", 00:13:11.079 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:11.079 "strip_size_kb": 64, 00:13:11.079 "state": "online", 00:13:11.079 "raid_level": "raid5f", 00:13:11.079 "superblock": false, 00:13:11.079 "num_base_bdevs": 3, 00:13:11.079 "num_base_bdevs_discovered": 3, 00:13:11.079 "num_base_bdevs_operational": 3, 00:13:11.079 "process": { 00:13:11.079 "type": "rebuild", 00:13:11.079 "target": "spare", 00:13:11.079 "progress": { 00:13:11.079 "blocks": 67584, 00:13:11.079 "percent": 51 00:13:11.079 } 00:13:11.079 }, 00:13:11.079 "base_bdevs_list": [ 00:13:11.079 { 00:13:11.079 "name": "spare", 00:13:11.079 "uuid": 
"c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:11.079 "is_configured": true, 00:13:11.079 "data_offset": 0, 00:13:11.079 "data_size": 65536 00:13:11.079 }, 00:13:11.079 { 00:13:11.079 "name": "BaseBdev2", 00:13:11.079 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:11.079 "is_configured": true, 00:13:11.079 "data_offset": 0, 00:13:11.079 "data_size": 65536 00:13:11.079 }, 00:13:11.079 { 00:13:11.079 "name": "BaseBdev3", 00:13:11.079 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:11.079 "is_configured": true, 00:13:11.079 "data_offset": 0, 00:13:11.079 "data_size": 65536 00:13:11.079 } 00:13:11.079 ] 00:13:11.079 }' 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.079 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.080 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.080 21:45:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.019 21:45:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.019 21:45:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.019 "name": "raid_bdev1", 00:13:12.019 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:12.019 "strip_size_kb": 64, 00:13:12.019 "state": "online", 00:13:12.019 "raid_level": "raid5f", 00:13:12.019 "superblock": false, 00:13:12.019 "num_base_bdevs": 3, 00:13:12.019 "num_base_bdevs_discovered": 3, 00:13:12.019 "num_base_bdevs_operational": 3, 00:13:12.019 "process": { 00:13:12.019 "type": "rebuild", 00:13:12.019 "target": "spare", 00:13:12.019 "progress": { 00:13:12.019 "blocks": 92160, 00:13:12.019 "percent": 70 00:13:12.019 } 00:13:12.019 }, 00:13:12.019 "base_bdevs_list": [ 00:13:12.019 { 00:13:12.019 "name": "spare", 00:13:12.019 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:12.019 "is_configured": true, 00:13:12.019 "data_offset": 0, 00:13:12.019 "data_size": 65536 00:13:12.019 }, 00:13:12.019 { 00:13:12.019 "name": "BaseBdev2", 00:13:12.019 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:12.019 "is_configured": true, 00:13:12.019 "data_offset": 0, 00:13:12.019 "data_size": 65536 00:13:12.019 }, 00:13:12.019 { 00:13:12.019 "name": "BaseBdev3", 00:13:12.019 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:12.019 "is_configured": true, 00:13:12.019 "data_offset": 0, 00:13:12.019 "data_size": 65536 00:13:12.019 } 00:13:12.019 ] 00:13:12.019 }' 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.019 21:45:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.401 "name": "raid_bdev1", 00:13:13.401 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:13.401 "strip_size_kb": 64, 00:13:13.401 "state": "online", 00:13:13.401 "raid_level": "raid5f", 00:13:13.401 "superblock": false, 00:13:13.401 "num_base_bdevs": 3, 00:13:13.401 "num_base_bdevs_discovered": 3, 00:13:13.401 
"num_base_bdevs_operational": 3, 00:13:13.401 "process": { 00:13:13.401 "type": "rebuild", 00:13:13.401 "target": "spare", 00:13:13.401 "progress": { 00:13:13.401 "blocks": 114688, 00:13:13.401 "percent": 87 00:13:13.401 } 00:13:13.401 }, 00:13:13.401 "base_bdevs_list": [ 00:13:13.401 { 00:13:13.401 "name": "spare", 00:13:13.401 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:13.401 "is_configured": true, 00:13:13.401 "data_offset": 0, 00:13:13.401 "data_size": 65536 00:13:13.401 }, 00:13:13.401 { 00:13:13.401 "name": "BaseBdev2", 00:13:13.401 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:13.401 "is_configured": true, 00:13:13.401 "data_offset": 0, 00:13:13.401 "data_size": 65536 00:13:13.401 }, 00:13:13.401 { 00:13:13.401 "name": "BaseBdev3", 00:13:13.401 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:13.401 "is_configured": true, 00:13:13.401 "data_offset": 0, 00:13:13.401 "data_size": 65536 00:13:13.401 } 00:13:13.401 ] 00:13:13.401 }' 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.401 21:45:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:13.970 [2024-11-27 21:45:36.886836] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:13.970 [2024-11-27 21:45:36.886955] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:13.970 [2024-11-27 21:45:36.887001] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.230 "name": "raid_bdev1", 00:13:14.230 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:14.230 "strip_size_kb": 64, 00:13:14.230 "state": "online", 00:13:14.230 "raid_level": "raid5f", 00:13:14.230 "superblock": false, 00:13:14.230 "num_base_bdevs": 3, 00:13:14.230 "num_base_bdevs_discovered": 3, 00:13:14.230 "num_base_bdevs_operational": 3, 00:13:14.230 "base_bdevs_list": [ 00:13:14.230 { 00:13:14.230 "name": "spare", 00:13:14.230 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:14.230 "is_configured": true, 00:13:14.230 "data_offset": 0, 00:13:14.230 "data_size": 65536 00:13:14.230 }, 00:13:14.230 { 00:13:14.230 "name": "BaseBdev2", 00:13:14.230 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:14.230 "is_configured": true, 00:13:14.230 
"data_offset": 0, 00:13:14.230 "data_size": 65536 00:13:14.230 }, 00:13:14.230 { 00:13:14.230 "name": "BaseBdev3", 00:13:14.230 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:14.230 "is_configured": true, 00:13:14.230 "data_offset": 0, 00:13:14.230 "data_size": 65536 00:13:14.230 } 00:13:14.230 ] 00:13:14.230 }' 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:14.230 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.490 21:45:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.490 "name": "raid_bdev1", 00:13:14.490 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:14.490 "strip_size_kb": 64, 00:13:14.490 "state": "online", 00:13:14.490 "raid_level": "raid5f", 00:13:14.490 "superblock": false, 00:13:14.490 "num_base_bdevs": 3, 00:13:14.490 "num_base_bdevs_discovered": 3, 00:13:14.490 "num_base_bdevs_operational": 3, 00:13:14.490 "base_bdevs_list": [ 00:13:14.490 { 00:13:14.490 "name": "spare", 00:13:14.490 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:14.490 "is_configured": true, 00:13:14.490 "data_offset": 0, 00:13:14.490 "data_size": 65536 00:13:14.490 }, 00:13:14.490 { 00:13:14.490 "name": "BaseBdev2", 00:13:14.490 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:14.490 "is_configured": true, 00:13:14.490 "data_offset": 0, 00:13:14.490 "data_size": 65536 00:13:14.490 }, 00:13:14.490 { 00:13:14.490 "name": "BaseBdev3", 00:13:14.490 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:14.490 "is_configured": true, 00:13:14.490 "data_offset": 0, 00:13:14.490 "data_size": 65536 00:13:14.490 } 00:13:14.490 ] 00:13:14.490 }' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:14.490 21:45:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.490 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:14.490 "name": "raid_bdev1", 00:13:14.490 "uuid": "0cb33c9e-1368-4bae-80ad-b6ad37df9637", 00:13:14.490 "strip_size_kb": 64, 00:13:14.490 "state": "online", 00:13:14.490 "raid_level": "raid5f", 00:13:14.490 "superblock": false, 00:13:14.490 "num_base_bdevs": 3, 00:13:14.490 "num_base_bdevs_discovered": 3, 00:13:14.490 "num_base_bdevs_operational": 3, 00:13:14.490 "base_bdevs_list": [ 00:13:14.490 { 00:13:14.490 "name": "spare", 00:13:14.490 "uuid": "c77e8cde-f8b0-56eb-9fcf-7311bcb4082f", 00:13:14.490 "is_configured": true, 00:13:14.490 "data_offset": 0, 00:13:14.490 "data_size": 65536 00:13:14.490 }, 00:13:14.490 { 00:13:14.490 
"name": "BaseBdev2", 00:13:14.491 "uuid": "455ea2e6-ffe6-5392-90e6-5266fb6a04da", 00:13:14.491 "is_configured": true, 00:13:14.491 "data_offset": 0, 00:13:14.491 "data_size": 65536 00:13:14.491 }, 00:13:14.491 { 00:13:14.491 "name": "BaseBdev3", 00:13:14.491 "uuid": "def4bec3-73f0-55ad-9cc3-39f33fad32da", 00:13:14.491 "is_configured": true, 00:13:14.491 "data_offset": 0, 00:13:14.491 "data_size": 65536 00:13:14.491 } 00:13:14.491 ] 00:13:14.491 }' 00:13:14.491 21:45:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:14.491 21:45:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.060 [2024-11-27 21:45:38.038390] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:15.060 [2024-11-27 21:45:38.038462] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.060 [2024-11-27 21:45:38.038562] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.060 [2024-11-27 21:45:38.038719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.060 [2024-11-27 21:45:38.038770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.060 21:45:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.060 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:15.320 /dev/nbd0 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:15.320 21:45:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.320 1+0 records in 00:13:15.320 1+0 records out 00:13:15.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336735 s, 12.2 MB/s 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.320 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:15.580 /dev/nbd1 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:15.580 1+0 records in 00:13:15.580 1+0 records out 00:13:15.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390045 s, 10.5 MB/s 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:15.580 21:45:38 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.580 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:15.839 21:45:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 91764 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 91764 ']' 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 91764 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91764 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91764' 00:13:16.100 killing process with pid 91764 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 91764 00:13:16.100 Received shutdown signal, test time was about 60.000000 seconds 00:13:16.100 00:13:16.100 Latency(us) 00:13:16.100 [2024-11-27T21:45:39.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.100 [2024-11-27T21:45:39.221Z] =================================================================================================================== 00:13:16.100 [2024-11-27T21:45:39.221Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:16.100 [2024-11-27 21:45:39.112533] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:16.100 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 91764 00:13:16.100 [2024-11-27 21:45:39.151011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:16.359 00:13:16.359 real 0m13.465s 00:13:16.359 user 0m16.843s 00:13:16.359 sys 0m1.843s 00:13:16.359 ************************************ 00:13:16.359 END TEST raid5f_rebuild_test 00:13:16.359 ************************************ 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.359 21:45:39 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:13:16.359 21:45:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:16.359 21:45:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.359 21:45:39 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:13:16.359 ************************************ 00:13:16.359 START TEST raid5f_rebuild_test_sb 00:13:16.359 ************************************ 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=92187 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 92187 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:16.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 92187 ']' 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:16.359 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.360 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:16.360 21:45:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.618 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.618 Zero copy mechanism will not be used. 00:13:16.618 [2024-11-27 21:45:39.505462] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:13:16.618 [2024-11-27 21:45:39.505578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92187 ] 00:13:16.618 [2024-11-27 21:45:39.637620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.618 [2024-11-27 21:45:39.661834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.618 [2024-11-27 21:45:39.703586] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.618 [2024-11-27 21:45:39.703627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:17.556 21:45:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 BaseBdev1_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 [2024-11-27 21:45:40.354261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:17.556 [2024-11-27 21:45:40.354374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.556 [2024-11-27 21:45:40.354419] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:17.556 [2024-11-27 21:45:40.354449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.556 [2024-11-27 21:45:40.356591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.556 [2024-11-27 21:45:40.356659] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:17.556 BaseBdev1 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 BaseBdev2_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 [2024-11-27 21:45:40.382574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:17.556 [2024-11-27 21:45:40.382675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.556 [2024-11-27 21:45:40.382715] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:17.556 [2024-11-27 21:45:40.382744] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.556 [2024-11-27 21:45:40.384816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.556 [2024-11-27 21:45:40.384884] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:17.556 BaseBdev2 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 
21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 BaseBdev3_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 [2024-11-27 21:45:40.411046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:17.556 [2024-11-27 21:45:40.411146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.556 [2024-11-27 21:45:40.411185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:17.556 [2024-11-27 21:45:40.411214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.556 [2024-11-27 21:45:40.413296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.556 [2024-11-27 21:45:40.413361] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:17.556 BaseBdev3 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 spare_malloc 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 spare_delay 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.556 [2024-11-27 21:45:40.469096] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:17.556 [2024-11-27 21:45:40.469221] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.556 [2024-11-27 21:45:40.469284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:17.556 [2024-11-27 21:45:40.469331] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.556 [2024-11-27 21:45:40.471997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.556 [2024-11-27 21:45:40.472075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:17.556 spare 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.556 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.557 [2024-11-27 21:45:40.481119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:17.557 [2024-11-27 21:45:40.482934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:17.557 [2024-11-27 21:45:40.482989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:17.557 [2024-11-27 21:45:40.483147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:17.557 [2024-11-27 21:45:40.483159] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:17.557 [2024-11-27 21:45:40.483396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:17.557 [2024-11-27 21:45:40.483755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:17.557 [2024-11-27 21:45:40.483765] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:17.557 [2024-11-27 21:45:40.483900] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.557 "name": "raid_bdev1", 00:13:17.557 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:17.557 "strip_size_kb": 64, 00:13:17.557 "state": "online", 00:13:17.557 "raid_level": "raid5f", 00:13:17.557 "superblock": true, 00:13:17.557 "num_base_bdevs": 3, 00:13:17.557 "num_base_bdevs_discovered": 3, 00:13:17.557 "num_base_bdevs_operational": 3, 00:13:17.557 "base_bdevs_list": [ 00:13:17.557 { 00:13:17.557 "name": "BaseBdev1", 00:13:17.557 "uuid": "c34ff0e5-1048-557b-a30a-19f6e13be353", 00:13:17.557 "is_configured": true, 00:13:17.557 "data_offset": 2048, 00:13:17.557 "data_size": 63488 00:13:17.557 }, 00:13:17.557 { 00:13:17.557 "name": "BaseBdev2", 00:13:17.557 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:17.557 "is_configured": true, 00:13:17.557 "data_offset": 2048, 00:13:17.557 "data_size": 63488 00:13:17.557 }, 00:13:17.557 { 00:13:17.557 "name": 
"BaseBdev3", 00:13:17.557 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:17.557 "is_configured": true, 00:13:17.557 "data_offset": 2048, 00:13:17.557 "data_size": 63488 00:13:17.557 } 00:13:17.557 ] 00:13:17.557 }' 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.557 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.816 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:17.816 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:17.816 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.816 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.816 [2024-11-27 21:45:40.932714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:18.076 21:45:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.076 21:45:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:18.076 [2024-11-27 21:45:41.168208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:13:18.076 /dev/nbd0 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.336 1+0 records in 00:13:18.336 1+0 records out 00:13:18.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031602 s, 13.0 MB/s 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:13:18.336 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:13:18.596 496+0 records in 00:13:18.596 496+0 records out 00:13:18.596 65011712 bytes (65 MB, 62 MiB) copied, 0.276543 s, 235 MB/s 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.596 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.856 [2024-11-27 21:45:41.748848] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:18.856 21:45:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.856 [2024-11-27 21:45:41.764905] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.856 "name": "raid_bdev1", 00:13:18.856 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:18.856 "strip_size_kb": 64, 00:13:18.856 "state": "online", 00:13:18.856 "raid_level": "raid5f", 00:13:18.856 "superblock": true, 00:13:18.856 "num_base_bdevs": 3, 00:13:18.856 "num_base_bdevs_discovered": 2, 00:13:18.856 "num_base_bdevs_operational": 2, 00:13:18.856 "base_bdevs_list": [ 00:13:18.856 { 00:13:18.856 "name": null, 00:13:18.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.856 "is_configured": false, 00:13:18.856 "data_offset": 0, 00:13:18.856 "data_size": 63488 00:13:18.856 }, 00:13:18.856 { 00:13:18.856 "name": "BaseBdev2", 00:13:18.856 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:18.856 "is_configured": true, 00:13:18.856 "data_offset": 2048, 00:13:18.856 "data_size": 63488 00:13:18.856 }, 00:13:18.856 { 00:13:18.856 "name": "BaseBdev3", 00:13:18.856 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:18.856 "is_configured": true, 00:13:18.856 "data_offset": 2048, 00:13:18.856 "data_size": 63488 00:13:18.856 } 00:13:18.856 ] 00:13:18.856 }' 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.856 21:45:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.425 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:19.426 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.426 21:45:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.426 [2024-11-27 21:45:42.256125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:19.426 [2024-11-27 21:45:42.260618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000255d0 00:13:19.426 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.426 21:45:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:19.426 [2024-11-27 21:45:42.262850] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.363 "name": "raid_bdev1", 00:13:20.363 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 
00:13:20.363 "strip_size_kb": 64, 00:13:20.363 "state": "online", 00:13:20.363 "raid_level": "raid5f", 00:13:20.363 "superblock": true, 00:13:20.363 "num_base_bdevs": 3, 00:13:20.363 "num_base_bdevs_discovered": 3, 00:13:20.363 "num_base_bdevs_operational": 3, 00:13:20.363 "process": { 00:13:20.363 "type": "rebuild", 00:13:20.363 "target": "spare", 00:13:20.363 "progress": { 00:13:20.363 "blocks": 20480, 00:13:20.363 "percent": 16 00:13:20.363 } 00:13:20.363 }, 00:13:20.363 "base_bdevs_list": [ 00:13:20.363 { 00:13:20.363 "name": "spare", 00:13:20.363 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:20.363 "is_configured": true, 00:13:20.363 "data_offset": 2048, 00:13:20.363 "data_size": 63488 00:13:20.363 }, 00:13:20.363 { 00:13:20.363 "name": "BaseBdev2", 00:13:20.363 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:20.363 "is_configured": true, 00:13:20.363 "data_offset": 2048, 00:13:20.363 "data_size": 63488 00:13:20.363 }, 00:13:20.363 { 00:13:20.363 "name": "BaseBdev3", 00:13:20.363 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:20.363 "is_configured": true, 00:13:20.363 "data_offset": 2048, 00:13:20.363 "data_size": 63488 00:13:20.363 } 00:13:20.363 ] 00:13:20.363 }' 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.363 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.363 [2024-11-27 21:45:43.399210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.363 [2024-11-27 21:45:43.469948] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:20.363 [2024-11-27 21:45:43.470006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.363 [2024-11-27 21:45:43.470021] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:20.363 [2024-11-27 21:45:43.470029] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.621 
21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.621 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.621 "name": "raid_bdev1", 00:13:20.621 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:20.621 "strip_size_kb": 64, 00:13:20.621 "state": "online", 00:13:20.621 "raid_level": "raid5f", 00:13:20.621 "superblock": true, 00:13:20.621 "num_base_bdevs": 3, 00:13:20.622 "num_base_bdevs_discovered": 2, 00:13:20.622 "num_base_bdevs_operational": 2, 00:13:20.622 "base_bdevs_list": [ 00:13:20.622 { 00:13:20.622 "name": null, 00:13:20.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.622 "is_configured": false, 00:13:20.622 "data_offset": 0, 00:13:20.622 "data_size": 63488 00:13:20.622 }, 00:13:20.622 { 00:13:20.622 "name": "BaseBdev2", 00:13:20.622 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:20.622 "is_configured": true, 00:13:20.622 "data_offset": 2048, 00:13:20.622 "data_size": 63488 00:13:20.622 }, 00:13:20.622 { 00:13:20.622 "name": "BaseBdev3", 00:13:20.622 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:20.622 "is_configured": true, 00:13:20.622 "data_offset": 2048, 00:13:20.622 "data_size": 63488 00:13:20.622 } 00:13:20.622 ] 00:13:20.622 }' 00:13:20.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.622 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:20.881 21:45:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.881 "name": "raid_bdev1", 00:13:20.881 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:20.881 "strip_size_kb": 64, 00:13:20.881 "state": "online", 00:13:20.881 "raid_level": "raid5f", 00:13:20.881 "superblock": true, 00:13:20.881 "num_base_bdevs": 3, 00:13:20.881 "num_base_bdevs_discovered": 2, 00:13:20.881 "num_base_bdevs_operational": 2, 00:13:20.881 "base_bdevs_list": [ 00:13:20.881 { 00:13:20.881 "name": null, 00:13:20.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.881 "is_configured": false, 00:13:20.881 "data_offset": 0, 00:13:20.881 "data_size": 63488 00:13:20.881 }, 00:13:20.881 { 00:13:20.881 "name": "BaseBdev2", 00:13:20.881 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:20.881 "is_configured": true, 00:13:20.881 "data_offset": 2048, 00:13:20.881 "data_size": 63488 00:13:20.881 }, 00:13:20.881 { 00:13:20.881 "name": "BaseBdev3", 00:13:20.881 "uuid": 
"d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:20.881 "is_configured": true, 00:13:20.881 "data_offset": 2048, 00:13:20.881 "data_size": 63488 00:13:20.881 } 00:13:20.881 ] 00:13:20.881 }' 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:20.881 21:45:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.141 [2024-11-27 21:45:44.026809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.141 [2024-11-27 21:45:44.031210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000256a0 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.141 21:45:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:21.141 [2024-11-27 21:45:44.033385] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.077 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.077 "name": "raid_bdev1", 00:13:22.077 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:22.077 "strip_size_kb": 64, 00:13:22.077 "state": "online", 00:13:22.077 "raid_level": "raid5f", 00:13:22.077 "superblock": true, 00:13:22.077 "num_base_bdevs": 3, 00:13:22.077 "num_base_bdevs_discovered": 3, 00:13:22.077 "num_base_bdevs_operational": 3, 00:13:22.077 "process": { 00:13:22.077 "type": "rebuild", 00:13:22.077 "target": "spare", 00:13:22.077 "progress": { 00:13:22.077 "blocks": 20480, 00:13:22.077 "percent": 16 00:13:22.077 } 00:13:22.078 }, 00:13:22.078 "base_bdevs_list": [ 00:13:22.078 { 00:13:22.078 "name": "spare", 00:13:22.078 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:22.078 "is_configured": true, 00:13:22.078 "data_offset": 2048, 00:13:22.078 "data_size": 63488 00:13:22.078 }, 00:13:22.078 { 00:13:22.078 "name": "BaseBdev2", 00:13:22.078 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:22.078 "is_configured": true, 00:13:22.078 "data_offset": 2048, 00:13:22.078 "data_size": 63488 00:13:22.078 }, 00:13:22.078 { 00:13:22.078 "name": "BaseBdev3", 00:13:22.078 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:22.078 
"is_configured": true, 00:13:22.078 "data_offset": 2048, 00:13:22.078 "data_size": 63488 00:13:22.078 } 00:13:22.078 ] 00:13:22.078 }' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:22.078 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=454 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.078 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.337 "name": "raid_bdev1", 00:13:22.337 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:22.337 "strip_size_kb": 64, 00:13:22.337 "state": "online", 00:13:22.337 "raid_level": "raid5f", 00:13:22.337 "superblock": true, 00:13:22.337 "num_base_bdevs": 3, 00:13:22.337 "num_base_bdevs_discovered": 3, 00:13:22.337 "num_base_bdevs_operational": 3, 00:13:22.337 "process": { 00:13:22.337 "type": "rebuild", 00:13:22.337 "target": "spare", 00:13:22.337 "progress": { 00:13:22.337 "blocks": 22528, 00:13:22.337 "percent": 17 00:13:22.337 } 00:13:22.337 }, 00:13:22.337 "base_bdevs_list": [ 00:13:22.337 { 00:13:22.337 "name": "spare", 00:13:22.337 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:22.337 "is_configured": true, 00:13:22.337 "data_offset": 2048, 00:13:22.337 "data_size": 63488 00:13:22.337 }, 00:13:22.337 { 00:13:22.337 "name": "BaseBdev2", 00:13:22.337 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:22.337 "is_configured": true, 00:13:22.337 "data_offset": 2048, 00:13:22.337 "data_size": 63488 00:13:22.337 }, 00:13:22.337 { 00:13:22.337 "name": "BaseBdev3", 00:13:22.337 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:22.337 "is_configured": true, 00:13:22.337 "data_offset": 2048, 00:13:22.337 "data_size": 63488 00:13:22.337 } 00:13:22.337 ] 00:13:22.337 }' 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.337 21:45:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.276 "name": "raid_bdev1", 00:13:23.276 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:23.276 "strip_size_kb": 64, 00:13:23.276 "state": "online", 00:13:23.276 
"raid_level": "raid5f", 00:13:23.276 "superblock": true, 00:13:23.276 "num_base_bdevs": 3, 00:13:23.276 "num_base_bdevs_discovered": 3, 00:13:23.276 "num_base_bdevs_operational": 3, 00:13:23.276 "process": { 00:13:23.276 "type": "rebuild", 00:13:23.276 "target": "spare", 00:13:23.276 "progress": { 00:13:23.276 "blocks": 45056, 00:13:23.276 "percent": 35 00:13:23.276 } 00:13:23.276 }, 00:13:23.276 "base_bdevs_list": [ 00:13:23.276 { 00:13:23.276 "name": "spare", 00:13:23.276 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:23.276 "is_configured": true, 00:13:23.276 "data_offset": 2048, 00:13:23.276 "data_size": 63488 00:13:23.276 }, 00:13:23.276 { 00:13:23.276 "name": "BaseBdev2", 00:13:23.276 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:23.276 "is_configured": true, 00:13:23.276 "data_offset": 2048, 00:13:23.276 "data_size": 63488 00:13:23.276 }, 00:13:23.276 { 00:13:23.276 "name": "BaseBdev3", 00:13:23.276 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:23.276 "is_configured": true, 00:13:23.276 "data_offset": 2048, 00:13:23.276 "data_size": 63488 00:13:23.276 } 00:13:23.276 ] 00:13:23.276 }' 00:13:23.276 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.535 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:23.536 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.536 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:23.536 21:45:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.474 "name": "raid_bdev1", 00:13:24.474 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:24.474 "strip_size_kb": 64, 00:13:24.474 "state": "online", 00:13:24.474 "raid_level": "raid5f", 00:13:24.474 "superblock": true, 00:13:24.474 "num_base_bdevs": 3, 00:13:24.474 "num_base_bdevs_discovered": 3, 00:13:24.474 "num_base_bdevs_operational": 3, 00:13:24.474 "process": { 00:13:24.474 "type": "rebuild", 00:13:24.474 "target": "spare", 00:13:24.474 "progress": { 00:13:24.474 "blocks": 69632, 00:13:24.474 "percent": 54 00:13:24.474 } 00:13:24.474 }, 00:13:24.474 "base_bdevs_list": [ 00:13:24.474 { 00:13:24.474 "name": "spare", 00:13:24.474 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:24.474 "is_configured": true, 00:13:24.474 "data_offset": 2048, 00:13:24.474 "data_size": 63488 00:13:24.474 }, 00:13:24.474 { 00:13:24.474 "name": "BaseBdev2", 00:13:24.474 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:24.474 
"is_configured": true, 00:13:24.474 "data_offset": 2048, 00:13:24.474 "data_size": 63488 00:13:24.474 }, 00:13:24.474 { 00:13:24.474 "name": "BaseBdev3", 00:13:24.474 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:24.474 "is_configured": true, 00:13:24.474 "data_offset": 2048, 00:13:24.474 "data_size": 63488 00:13:24.474 } 00:13:24.474 ] 00:13:24.474 }' 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.474 21:45:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.854 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.855 21:45:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.855 "name": "raid_bdev1", 00:13:25.855 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:25.855 "strip_size_kb": 64, 00:13:25.855 "state": "online", 00:13:25.855 "raid_level": "raid5f", 00:13:25.855 "superblock": true, 00:13:25.855 "num_base_bdevs": 3, 00:13:25.855 "num_base_bdevs_discovered": 3, 00:13:25.855 "num_base_bdevs_operational": 3, 00:13:25.855 "process": { 00:13:25.855 "type": "rebuild", 00:13:25.855 "target": "spare", 00:13:25.855 "progress": { 00:13:25.855 "blocks": 92160, 00:13:25.855 "percent": 72 00:13:25.855 } 00:13:25.855 }, 00:13:25.855 "base_bdevs_list": [ 00:13:25.855 { 00:13:25.855 "name": "spare", 00:13:25.855 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:25.855 "is_configured": true, 00:13:25.855 "data_offset": 2048, 00:13:25.855 "data_size": 63488 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "name": "BaseBdev2", 00:13:25.855 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:25.855 "is_configured": true, 00:13:25.855 "data_offset": 2048, 00:13:25.855 "data_size": 63488 00:13:25.855 }, 00:13:25.855 { 00:13:25.855 "name": "BaseBdev3", 00:13:25.855 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:25.855 "is_configured": true, 00:13:25.855 "data_offset": 2048, 00:13:25.855 "data_size": 63488 00:13:25.855 } 00:13:25.855 ] 00:13:25.855 }' 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:25.855 21:45:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.820 "name": "raid_bdev1", 00:13:26.820 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:26.820 "strip_size_kb": 64, 00:13:26.820 "state": "online", 00:13:26.820 "raid_level": "raid5f", 00:13:26.820 "superblock": true, 00:13:26.820 "num_base_bdevs": 3, 00:13:26.820 "num_base_bdevs_discovered": 3, 00:13:26.820 "num_base_bdevs_operational": 3, 00:13:26.820 "process": { 00:13:26.820 "type": "rebuild", 00:13:26.820 "target": "spare", 00:13:26.820 "progress": { 00:13:26.820 "blocks": 114688, 
00:13:26.820 "percent": 90 00:13:26.820 } 00:13:26.820 }, 00:13:26.820 "base_bdevs_list": [ 00:13:26.820 { 00:13:26.820 "name": "spare", 00:13:26.820 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 }, 00:13:26.820 { 00:13:26.820 "name": "BaseBdev2", 00:13:26.820 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 }, 00:13:26.820 { 00:13:26.820 "name": "BaseBdev3", 00:13:26.820 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:26.820 "is_configured": true, 00:13:26.820 "data_offset": 2048, 00:13:26.820 "data_size": 63488 00:13:26.820 } 00:13:26.820 ] 00:13:26.820 }' 00:13:26.820 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.821 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.821 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.821 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.821 21:45:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.481 [2024-11-27 21:45:50.267063] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:27.481 [2024-11-27 21:45:50.267196] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:27.481 [2024-11-27 21:45:50.267340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.741 
21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.741 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.001 "name": "raid_bdev1", 00:13:28.001 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:28.001 "strip_size_kb": 64, 00:13:28.001 "state": "online", 00:13:28.001 "raid_level": "raid5f", 00:13:28.001 "superblock": true, 00:13:28.001 "num_base_bdevs": 3, 00:13:28.001 "num_base_bdevs_discovered": 3, 00:13:28.001 "num_base_bdevs_operational": 3, 00:13:28.001 "base_bdevs_list": [ 00:13:28.001 { 00:13:28.001 "name": "spare", 00:13:28.001 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 }, 00:13:28.001 { 00:13:28.001 "name": "BaseBdev2", 00:13:28.001 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 }, 00:13:28.001 { 00:13:28.001 "name": "BaseBdev3", 00:13:28.001 
"uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 } 00:13:28.001 ] 00:13:28.001 }' 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.001 21:45:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.001 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.001 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.001 "name": 
"raid_bdev1", 00:13:28.001 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:28.001 "strip_size_kb": 64, 00:13:28.001 "state": "online", 00:13:28.001 "raid_level": "raid5f", 00:13:28.001 "superblock": true, 00:13:28.001 "num_base_bdevs": 3, 00:13:28.001 "num_base_bdevs_discovered": 3, 00:13:28.001 "num_base_bdevs_operational": 3, 00:13:28.001 "base_bdevs_list": [ 00:13:28.001 { 00:13:28.001 "name": "spare", 00:13:28.001 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 }, 00:13:28.001 { 00:13:28.001 "name": "BaseBdev2", 00:13:28.001 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 }, 00:13:28.001 { 00:13:28.001 "name": "BaseBdev3", 00:13:28.001 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:28.001 "is_configured": true, 00:13:28.001 "data_offset": 2048, 00:13:28.001 "data_size": 63488 00:13:28.001 } 00:13:28.001 ] 00:13:28.001 }' 00:13:28.001 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.001 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.001 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.261 "name": "raid_bdev1", 00:13:28.261 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:28.261 "strip_size_kb": 64, 00:13:28.261 "state": "online", 00:13:28.261 "raid_level": "raid5f", 00:13:28.261 "superblock": true, 00:13:28.261 "num_base_bdevs": 3, 00:13:28.261 "num_base_bdevs_discovered": 3, 00:13:28.261 "num_base_bdevs_operational": 3, 00:13:28.261 "base_bdevs_list": [ 00:13:28.261 { 00:13:28.261 "name": "spare", 00:13:28.261 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:28.261 "is_configured": true, 00:13:28.261 "data_offset": 2048, 00:13:28.261 "data_size": 63488 00:13:28.261 }, 00:13:28.261 { 00:13:28.261 "name": "BaseBdev2", 
00:13:28.261 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:28.261 "is_configured": true, 00:13:28.261 "data_offset": 2048, 00:13:28.261 "data_size": 63488 00:13:28.261 }, 00:13:28.261 { 00:13:28.261 "name": "BaseBdev3", 00:13:28.261 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:28.261 "is_configured": true, 00:13:28.261 "data_offset": 2048, 00:13:28.261 "data_size": 63488 00:13:28.261 } 00:13:28.261 ] 00:13:28.261 }' 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.261 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:28.519 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.519 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.519 [2024-11-27 21:45:51.606438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:28.519 [2024-11-27 21:45:51.606471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:28.519 [2024-11-27 21:45:51.606556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:28.519 [2024-11-27 21:45:51.606638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:28.519 [2024-11-27 21:45:51.606650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:28.520 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.520 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.520 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.520 21:45:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.520 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:28.520 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:28.778 /dev/nbd0 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.778 1+0 records in 00:13:28.778 1+0 records out 00:13:28.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529153 s, 7.7 MB/s 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 
2 )) 00:13:28.778 21:45:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:29.038 /dev/nbd1 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.038 1+0 records in 00:13:29.038 1+0 records out 00:13:29.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045161 s, 9.1 MB/s 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:29.038 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.298 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:29.557 21:45:52 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.557 [2024-11-27 21:45:52.636179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:29.557 [2024-11-27 21:45:52.636273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.557 [2024-11-27 21:45:52.636298] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:29.557 [2024-11-27 21:45:52.636307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.557 [2024-11-27 21:45:52.638446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.557 [2024-11-27 21:45:52.638481] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:29.557 [2024-11-27 21:45:52.638555] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:29.557 [2024-11-27 21:45:52.638592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.557 [2024-11-27 21:45:52.638722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.557 [2024-11-27 21:45:52.638824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.557 spare 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.557 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.817 [2024-11-27 21:45:52.738714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 
00:13:29.817 [2024-11-27 21:45:52.738742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:13:29.817 [2024-11-27 21:45:52.739051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043d50 00:13:29.817 [2024-11-27 21:45:52.739504] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:13:29.817 [2024-11-27 21:45:52.739527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:13:29.817 [2024-11-27 21:45:52.739670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.817 "name": "raid_bdev1", 00:13:29.817 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:29.817 "strip_size_kb": 64, 00:13:29.817 "state": "online", 00:13:29.817 "raid_level": "raid5f", 00:13:29.817 "superblock": true, 00:13:29.817 "num_base_bdevs": 3, 00:13:29.817 "num_base_bdevs_discovered": 3, 00:13:29.817 "num_base_bdevs_operational": 3, 00:13:29.817 "base_bdevs_list": [ 00:13:29.817 { 00:13:29.817 "name": "spare", 00:13:29.817 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:29.817 "is_configured": true, 00:13:29.817 "data_offset": 2048, 00:13:29.817 "data_size": 63488 00:13:29.817 }, 00:13:29.817 { 00:13:29.817 "name": "BaseBdev2", 00:13:29.817 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:29.817 "is_configured": true, 00:13:29.817 "data_offset": 2048, 00:13:29.817 "data_size": 63488 00:13:29.817 }, 00:13:29.817 { 00:13:29.817 "name": "BaseBdev3", 00:13:29.817 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:29.817 "is_configured": true, 00:13:29.817 "data_offset": 2048, 00:13:29.817 "data_size": 63488 00:13:29.817 } 00:13:29.817 ] 00:13:29.817 }' 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.817 21:45:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.077 21:45:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.077 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.336 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.336 "name": "raid_bdev1", 00:13:30.336 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:30.336 "strip_size_kb": 64, 00:13:30.336 "state": "online", 00:13:30.336 "raid_level": "raid5f", 00:13:30.336 "superblock": true, 00:13:30.336 "num_base_bdevs": 3, 00:13:30.336 "num_base_bdevs_discovered": 3, 00:13:30.336 "num_base_bdevs_operational": 3, 00:13:30.336 "base_bdevs_list": [ 00:13:30.336 { 00:13:30.336 "name": "spare", 00:13:30.337 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:30.337 "is_configured": true, 00:13:30.337 "data_offset": 2048, 00:13:30.337 "data_size": 63488 00:13:30.337 }, 00:13:30.337 { 00:13:30.337 "name": "BaseBdev2", 00:13:30.337 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:30.337 "is_configured": true, 00:13:30.337 "data_offset": 2048, 00:13:30.337 "data_size": 63488 00:13:30.337 }, 00:13:30.337 { 00:13:30.337 "name": "BaseBdev3", 00:13:30.337 "uuid": 
"d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:30.337 "is_configured": true, 00:13:30.337 "data_offset": 2048, 00:13:30.337 "data_size": 63488 00:13:30.337 } 00:13:30.337 ] 00:13:30.337 }' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 [2024-11-27 21:45:53.335881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:30.337 
21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:30.337 "name": "raid_bdev1", 00:13:30.337 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:30.337 "strip_size_kb": 64, 00:13:30.337 "state": "online", 00:13:30.337 "raid_level": "raid5f", 00:13:30.337 "superblock": true, 00:13:30.337 "num_base_bdevs": 3, 00:13:30.337 "num_base_bdevs_discovered": 2, 00:13:30.337 "num_base_bdevs_operational": 2, 
00:13:30.337 "base_bdevs_list": [ 00:13:30.337 { 00:13:30.337 "name": null, 00:13:30.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.337 "is_configured": false, 00:13:30.337 "data_offset": 0, 00:13:30.337 "data_size": 63488 00:13:30.337 }, 00:13:30.337 { 00:13:30.337 "name": "BaseBdev2", 00:13:30.337 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:30.337 "is_configured": true, 00:13:30.337 "data_offset": 2048, 00:13:30.337 "data_size": 63488 00:13:30.337 }, 00:13:30.337 { 00:13:30.337 "name": "BaseBdev3", 00:13:30.337 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:30.337 "is_configured": true, 00:13:30.337 "data_offset": 2048, 00:13:30.337 "data_size": 63488 00:13:30.337 } 00:13:30.337 ] 00:13:30.337 }' 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:30.337 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.906 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:30.906 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.906 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.906 [2024-11-27 21:45:53.739189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.906 [2024-11-27 21:45:53.739435] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:30.906 [2024-11-27 21:45:53.739497] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:30.906 [2024-11-27 21:45:53.739571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:30.906 [2024-11-27 21:45:53.743875] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043e20 00:13:30.906 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.906 21:45:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:30.906 [2024-11-27 21:45:53.746105] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.845 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.846 "name": "raid_bdev1", 00:13:31.846 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:31.846 "strip_size_kb": 64, 00:13:31.846 "state": "online", 00:13:31.846 
"raid_level": "raid5f", 00:13:31.846 "superblock": true, 00:13:31.846 "num_base_bdevs": 3, 00:13:31.846 "num_base_bdevs_discovered": 3, 00:13:31.846 "num_base_bdevs_operational": 3, 00:13:31.846 "process": { 00:13:31.846 "type": "rebuild", 00:13:31.846 "target": "spare", 00:13:31.846 "progress": { 00:13:31.846 "blocks": 20480, 00:13:31.846 "percent": 16 00:13:31.846 } 00:13:31.846 }, 00:13:31.846 "base_bdevs_list": [ 00:13:31.846 { 00:13:31.846 "name": "spare", 00:13:31.846 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:31.846 "is_configured": true, 00:13:31.846 "data_offset": 2048, 00:13:31.846 "data_size": 63488 00:13:31.846 }, 00:13:31.846 { 00:13:31.846 "name": "BaseBdev2", 00:13:31.846 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:31.846 "is_configured": true, 00:13:31.846 "data_offset": 2048, 00:13:31.846 "data_size": 63488 00:13:31.846 }, 00:13:31.846 { 00:13:31.846 "name": "BaseBdev3", 00:13:31.846 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:31.846 "is_configured": true, 00:13:31.846 "data_offset": 2048, 00:13:31.846 "data_size": 63488 00:13:31.846 } 00:13:31.846 ] 00:13:31.846 }' 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.846 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.846 [2024-11-27 21:45:54.882269] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.846 [2024-11-27 21:45:54.953277] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:31.846 [2024-11-27 21:45:54.953388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.846 [2024-11-27 21:45:54.953409] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:31.846 [2024-11-27 21:45:54.953417] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.106 21:45:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.106 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.106 "name": "raid_bdev1", 00:13:32.106 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:32.106 "strip_size_kb": 64, 00:13:32.106 "state": "online", 00:13:32.106 "raid_level": "raid5f", 00:13:32.106 "superblock": true, 00:13:32.106 "num_base_bdevs": 3, 00:13:32.106 "num_base_bdevs_discovered": 2, 00:13:32.106 "num_base_bdevs_operational": 2, 00:13:32.106 "base_bdevs_list": [ 00:13:32.106 { 00:13:32.106 "name": null, 00:13:32.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.106 "is_configured": false, 00:13:32.106 "data_offset": 0, 00:13:32.106 "data_size": 63488 00:13:32.106 }, 00:13:32.106 { 00:13:32.106 "name": "BaseBdev2", 00:13:32.106 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:32.106 "is_configured": true, 00:13:32.106 "data_offset": 2048, 00:13:32.106 "data_size": 63488 00:13:32.106 }, 00:13:32.106 { 00:13:32.106 "name": "BaseBdev3", 00:13:32.106 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:32.106 "is_configured": true, 00:13:32.106 "data_offset": 2048, 00:13:32.106 "data_size": 63488 00:13:32.106 } 00:13:32.106 ] 00:13:32.106 }' 00:13:32.106 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.106 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.367 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:32.367 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.367 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.367 [2024-11-27 21:45:55.402047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:32.367 [2024-11-27 21:45:55.402157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:32.367 [2024-11-27 21:45:55.402194] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:32.367 [2024-11-27 21:45:55.402221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:32.367 [2024-11-27 21:45:55.402696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:32.367 [2024-11-27 21:45:55.402752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:32.367 [2024-11-27 21:45:55.402882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:32.367 [2024-11-27 21:45:55.402923] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.367 [2024-11-27 21:45:55.402984] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:32.367 [2024-11-27 21:45:55.403045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.367 [2024-11-27 21:45:55.407340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000043ef0 00:13:32.367 spare 00:13:32.367 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.367 21:45:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:32.367 [2024-11-27 21:45:55.409571] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.307 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.567 "name": "raid_bdev1", 00:13:33.567 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:33.567 "strip_size_kb": 64, 00:13:33.567 "state": 
"online", 00:13:33.567 "raid_level": "raid5f", 00:13:33.567 "superblock": true, 00:13:33.567 "num_base_bdevs": 3, 00:13:33.567 "num_base_bdevs_discovered": 3, 00:13:33.567 "num_base_bdevs_operational": 3, 00:13:33.567 "process": { 00:13:33.567 "type": "rebuild", 00:13:33.567 "target": "spare", 00:13:33.567 "progress": { 00:13:33.567 "blocks": 20480, 00:13:33.567 "percent": 16 00:13:33.567 } 00:13:33.567 }, 00:13:33.567 "base_bdevs_list": [ 00:13:33.567 { 00:13:33.567 "name": "spare", 00:13:33.567 "uuid": "842b97ab-003c-51c4-9a02-677a2cacdc14", 00:13:33.567 "is_configured": true, 00:13:33.567 "data_offset": 2048, 00:13:33.567 "data_size": 63488 00:13:33.567 }, 00:13:33.567 { 00:13:33.567 "name": "BaseBdev2", 00:13:33.567 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:33.567 "is_configured": true, 00:13:33.567 "data_offset": 2048, 00:13:33.567 "data_size": 63488 00:13:33.567 }, 00:13:33.567 { 00:13:33.567 "name": "BaseBdev3", 00:13:33.567 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:33.567 "is_configured": true, 00:13:33.567 "data_offset": 2048, 00:13:33.567 "data_size": 63488 00:13:33.567 } 00:13:33.567 ] 00:13:33.567 }' 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.567 [2024-11-27 21:45:56.565606] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.567 [2024-11-27 21:45:56.616403] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.567 [2024-11-27 21:45:56.616456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.567 [2024-11-27 21:45:56.616472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.567 [2024-11-27 21:45:56.616483] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.567 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.568 "name": "raid_bdev1", 00:13:33.568 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:33.568 "strip_size_kb": 64, 00:13:33.568 "state": "online", 00:13:33.568 "raid_level": "raid5f", 00:13:33.568 "superblock": true, 00:13:33.568 "num_base_bdevs": 3, 00:13:33.568 "num_base_bdevs_discovered": 2, 00:13:33.568 "num_base_bdevs_operational": 2, 00:13:33.568 "base_bdevs_list": [ 00:13:33.568 { 00:13:33.568 "name": null, 00:13:33.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.568 "is_configured": false, 00:13:33.568 "data_offset": 0, 00:13:33.568 "data_size": 63488 00:13:33.568 }, 00:13:33.568 { 00:13:33.568 "name": "BaseBdev2", 00:13:33.568 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:33.568 "is_configured": true, 00:13:33.568 "data_offset": 2048, 00:13:33.568 "data_size": 63488 00:13:33.568 }, 00:13:33.568 { 00:13:33.568 "name": "BaseBdev3", 00:13:33.568 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:33.568 "is_configured": true, 00:13:33.568 "data_offset": 2048, 00:13:33.568 "data_size": 63488 00:13:33.568 } 00:13:33.568 ] 00:13:33.568 }' 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.568 21:45:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.137 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:34.138 "name": "raid_bdev1", 00:13:34.138 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:34.138 "strip_size_kb": 64, 00:13:34.138 "state": "online", 00:13:34.138 "raid_level": "raid5f", 00:13:34.138 "superblock": true, 00:13:34.138 "num_base_bdevs": 3, 00:13:34.138 "num_base_bdevs_discovered": 2, 00:13:34.138 "num_base_bdevs_operational": 2, 00:13:34.138 "base_bdevs_list": [ 00:13:34.138 { 00:13:34.138 "name": null, 00:13:34.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.138 "is_configured": false, 00:13:34.138 "data_offset": 0, 00:13:34.138 "data_size": 63488 00:13:34.138 }, 00:13:34.138 { 00:13:34.138 "name": "BaseBdev2", 00:13:34.138 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:34.138 "is_configured": true, 00:13:34.138 "data_offset": 2048, 00:13:34.138 "data_size": 63488 00:13:34.138 }, 00:13:34.138 { 00:13:34.138 "name": "BaseBdev3", 00:13:34.138 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:34.138 "is_configured": true, 
00:13:34.138 "data_offset": 2048, 00:13:34.138 "data_size": 63488 00:13:34.138 } 00:13:34.138 ] 00:13:34.138 }' 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.138 [2024-11-27 21:45:57.224747] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.138 [2024-11-27 21:45:57.224813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.138 [2024-11-27 21:45:57.224833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:34.138 [2024-11-27 21:45:57.224843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.138 [2024-11-27 21:45:57.225257] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.138 [2024-11-27 
21:45:57.225291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.138 [2024-11-27 21:45:57.225360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:34.138 [2024-11-27 21:45:57.225378] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.138 [2024-11-27 21:45:57.225386] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:34.138 [2024-11-27 21:45:57.225398] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:34.138 BaseBdev1 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.138 21:45:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.518 21:45:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.518 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.518 "name": "raid_bdev1", 00:13:35.518 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:35.518 "strip_size_kb": 64, 00:13:35.518 "state": "online", 00:13:35.518 "raid_level": "raid5f", 00:13:35.518 "superblock": true, 00:13:35.518 "num_base_bdevs": 3, 00:13:35.518 "num_base_bdevs_discovered": 2, 00:13:35.518 "num_base_bdevs_operational": 2, 00:13:35.518 "base_bdevs_list": [ 00:13:35.518 { 00:13:35.518 "name": null, 00:13:35.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.519 "is_configured": false, 00:13:35.519 "data_offset": 0, 00:13:35.519 "data_size": 63488 00:13:35.519 }, 00:13:35.519 { 00:13:35.519 "name": "BaseBdev2", 00:13:35.519 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:35.519 "is_configured": true, 00:13:35.519 "data_offset": 2048, 00:13:35.519 "data_size": 63488 00:13:35.519 }, 00:13:35.519 { 00:13:35.519 "name": "BaseBdev3", 00:13:35.519 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:35.519 "is_configured": true, 00:13:35.519 "data_offset": 2048, 00:13:35.519 "data_size": 63488 00:13:35.519 } 00:13:35.519 ] 00:13:35.519 }' 00:13:35.519 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.519 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.778 "name": "raid_bdev1", 00:13:35.778 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:35.778 "strip_size_kb": 64, 00:13:35.778 "state": "online", 00:13:35.778 "raid_level": "raid5f", 00:13:35.778 "superblock": true, 00:13:35.778 "num_base_bdevs": 3, 00:13:35.778 "num_base_bdevs_discovered": 2, 00:13:35.778 "num_base_bdevs_operational": 2, 00:13:35.778 "base_bdevs_list": [ 00:13:35.778 { 00:13:35.778 "name": null, 00:13:35.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.778 "is_configured": false, 00:13:35.778 "data_offset": 0, 00:13:35.778 "data_size": 63488 00:13:35.778 }, 00:13:35.778 { 00:13:35.778 "name": "BaseBdev2", 00:13:35.778 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 
00:13:35.778 "is_configured": true, 00:13:35.778 "data_offset": 2048, 00:13:35.778 "data_size": 63488 00:13:35.778 }, 00:13:35.778 { 00:13:35.778 "name": "BaseBdev3", 00:13:35.778 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:35.778 "is_configured": true, 00:13:35.778 "data_offset": 2048, 00:13:35.778 "data_size": 63488 00:13:35.778 } 00:13:35.778 ] 00:13:35.778 }' 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.778 21:45:58 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.778 [2024-11-27 21:45:58.810941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.778 [2024-11-27 21:45:58.811099] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:35.778 [2024-11-27 21:45:58.811116] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.778 request: 00:13:35.778 { 00:13:35.778 "base_bdev": "BaseBdev1", 00:13:35.778 "raid_bdev": "raid_bdev1", 00:13:35.778 "method": "bdev_raid_add_base_bdev", 00:13:35.778 "req_id": 1 00:13:35.778 } 00:13:35.778 Got JSON-RPC error response 00:13:35.778 response: 00:13:35.778 { 00:13:35.778 "code": -22, 00:13:35.778 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:35.778 } 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:35.778 21:45:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.716 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.976 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.976 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.976 "name": "raid_bdev1", 00:13:36.976 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:36.976 "strip_size_kb": 64, 00:13:36.976 "state": "online", 00:13:36.976 "raid_level": "raid5f", 00:13:36.976 "superblock": true, 00:13:36.976 "num_base_bdevs": 3, 00:13:36.976 "num_base_bdevs_discovered": 2, 00:13:36.976 "num_base_bdevs_operational": 2, 00:13:36.976 "base_bdevs_list": [ 00:13:36.976 { 00:13:36.976 "name": null, 00:13:36.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.976 "is_configured": false, 00:13:36.976 "data_offset": 0, 00:13:36.976 "data_size": 63488 00:13:36.976 }, 00:13:36.976 { 00:13:36.976 
"name": "BaseBdev2", 00:13:36.976 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:36.976 "is_configured": true, 00:13:36.976 "data_offset": 2048, 00:13:36.976 "data_size": 63488 00:13:36.976 }, 00:13:36.976 { 00:13:36.976 "name": "BaseBdev3", 00:13:36.976 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:36.976 "is_configured": true, 00:13:36.976 "data_offset": 2048, 00:13:36.976 "data_size": 63488 00:13:36.976 } 00:13:36.976 ] 00:13:36.976 }' 00:13:36.976 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.976 21:45:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.237 "name": "raid_bdev1", 00:13:37.237 "uuid": "28652b17-4cdc-4a52-94f1-fadcbb48b465", 00:13:37.237 
"strip_size_kb": 64, 00:13:37.237 "state": "online", 00:13:37.237 "raid_level": "raid5f", 00:13:37.237 "superblock": true, 00:13:37.237 "num_base_bdevs": 3, 00:13:37.237 "num_base_bdevs_discovered": 2, 00:13:37.237 "num_base_bdevs_operational": 2, 00:13:37.237 "base_bdevs_list": [ 00:13:37.237 { 00:13:37.237 "name": null, 00:13:37.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.237 "is_configured": false, 00:13:37.237 "data_offset": 0, 00:13:37.237 "data_size": 63488 00:13:37.237 }, 00:13:37.237 { 00:13:37.237 "name": "BaseBdev2", 00:13:37.237 "uuid": "73c84bbc-4979-5fe3-8e3e-fab950499ee6", 00:13:37.237 "is_configured": true, 00:13:37.237 "data_offset": 2048, 00:13:37.237 "data_size": 63488 00:13:37.237 }, 00:13:37.237 { 00:13:37.237 "name": "BaseBdev3", 00:13:37.237 "uuid": "d5c4307b-ea11-5fb8-9a0c-3f19a9d90bd2", 00:13:37.237 "is_configured": true, 00:13:37.237 "data_offset": 2048, 00:13:37.237 "data_size": 63488 00:13:37.237 } 00:13:37.237 ] 00:13:37.237 }' 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.237 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.497 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.497 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 92187 00:13:37.497 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 92187 ']' 00:13:37.497 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 92187 00:13:37.497 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:37.498 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:37.498 21:46:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92187 killing process with pid 92187 Received shutdown signal, test time was about 60.000000 seconds 00:13:37.498 00:13:37.498 Latency(us) 00:13:37.498 [2024-11-27T21:46:00.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.498 [2024-11-27T21:46:00.619Z] =================================================================================================================== 00:13:37.498 [2024-11-27T21:46:00.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:37.498 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92187' 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 92187 [2024-11-27 21:46:00.409673] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start [2024-11-27 21:46:00.409784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct [2024-11-27 21:46:00.409863] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 92187 00:13:37.498 00:13:37.498 [2024-11-27 21:46:00.409873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:13:37.498 [2024-11-27 21:46:00.451608] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:37.758 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:37.758 00:13:37.758 real 0m21.232s 00:13:37.758 user 0m27.533s
00:13:37.758 sys 0m2.610s 00:13:37.758 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:37.758 ************************************ 00:13:37.758 END TEST raid5f_rebuild_test_sb 00:13:37.758 ************************************ 00:13:37.758 21:46:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.758 21:46:00 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:13:37.758 21:46:00 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:13:37.758 21:46:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:37.758 21:46:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:37.758 21:46:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:37.758 ************************************ 00:13:37.758 START TEST raid5f_state_function_test 00:13:37.758 ************************************ 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=92918
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 92918'
00:13:37.758 Process raid pid: 92918
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 92918
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 92918 ']'
00:13:37.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:37.758 21:46:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:37.758 [2024-11-27 21:46:00.819364] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization...
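The trace above shows bdev_raid.sh choosing its `bdev_raid_create` arguments: any level other than raid1 gets a 64 KiB strip size (`-z 64`), and because this run has `superblock=false` the superblock argument is left empty. A minimal re-creation of that selection logic; the `-s` value for the superblock case is our assumption (this log only shows the `superblock=false` branch, where the argument stays empty):

```shell
# Argument selection as traced at bdev_raid.sh lines 215-225.
raid_level=raid5f
superblock=false

strip_size_create_arg=
if [ "$raid_level" != raid1 ]; then
    strip_size=64
    strip_size_create_arg="-z $strip_size"   # raid1 takes no strip size
fi

superblock_create_arg=
if [ "$superblock" = true ]; then
    superblock_create_arg=-s   # assumed flag; not exercised in this log
fi

echo "create args:$strip_size_create_arg$superblock_create_arg"
```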
00:13:37.758 [2024-11-27 21:46:00.819501] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:38.018 [2024-11-27 21:46:00.974915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:38.018 [2024-11-27 21:46:00.999383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:38.018 [2024-11-27 21:46:01.040157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:38.018 [2024-11-27 21:46:01.040190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:38.587 [2024-11-27 21:46:01.641976] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:38.587 [2024-11-27 21:46:01.642088] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:38.587 [2024-11-27 21:46:01.642103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:38.587 [2024-11-27 21:46:01.642113] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:38.587 [2024-11-27 21:46:01.642119] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:38.587 [2024-11-27 21:46:01.642132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:38.587 [2024-11-27 21:46:01.642138] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:38.587 [2024-11-27 21:46:01.642146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:38.587 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:38.587 "name": "Existed_Raid",
00:13:38.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.587 "strip_size_kb": 64,
00:13:38.587 "state": "configuring",
00:13:38.587 "raid_level": "raid5f",
00:13:38.587 "superblock": false,
00:13:38.587 "num_base_bdevs": 4,
00:13:38.587 "num_base_bdevs_discovered": 0,
00:13:38.587 "num_base_bdevs_operational": 4,
00:13:38.587 "base_bdevs_list": [
00:13:38.587 {
00:13:38.587 "name": "BaseBdev1",
00:13:38.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.587 "is_configured": false,
00:13:38.587 "data_offset": 0,
00:13:38.587 "data_size": 0
00:13:38.587 },
00:13:38.587 {
00:13:38.587 "name": "BaseBdev2",
00:13:38.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.587 "is_configured": false,
00:13:38.587 "data_offset": 0,
00:13:38.587 "data_size": 0
00:13:38.587 },
00:13:38.587 {
00:13:38.587 "name": "BaseBdev3",
00:13:38.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.587 "is_configured": false,
00:13:38.587 "data_offset": 0,
00:13:38.587 "data_size": 0
00:13:38.587 },
00:13:38.587 {
00:13:38.587 "name": "BaseBdev4",
00:13:38.587 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:38.587 "is_configured": false,
00:13:38.587 "data_offset": 0,
00:13:38.587 "data_size": 0
00:13:38.587 }
00:13:38.587 ]
00:13:38.587 }'
00:13:38.588 21:46:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:38.588 21:46:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 [2024-11-27 21:46:02.045223] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:39.158 [2024-11-27 21:46:02.045299] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 [2024-11-27 21:46:02.053236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:13:39.158 [2024-11-27 21:46:02.053310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:13:39.158 [2024-11-27 21:46:02.053336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:39.158 [2024-11-27 21:46:02.053358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:39.158 [2024-11-27 21:46:02.053375] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:39.158 [2024-11-27 21:46:02.053396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:39.158 [2024-11-27 21:46:02.053413] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
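The `verify_raid_bdev_state` helper traced above pipes `rpc_cmd bdev_raid_get_bdevs all` through a jq `select` filter and then reads fields such as `state` and `num_base_bdevs_discovered` out of the result. The same filter can be exercised without a live SPDK target by feeding it a trimmed copy of the JSON this log captured (a sketch; field subset chosen by us):

```shell
# Run verify_raid_bdev_state's jq filter against sample bdev_raid_get_bdevs
# output: a raid created before its base bdevs exist sits in "configuring"
# with zero discovered base bdevs.
raid_bdev_info=$(jq -r '.[] | select(.name == "Existed_Raid")' <<'EOF'
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid5f",
    "strip_size_kb": 64,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 0,
    "num_base_bdevs_operational": 4
  }
]
EOF
)
state=$(jq -r '.state' <<< "$raid_bdev_info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$raid_bdev_info")
echo "$state $discovered"
```

Against a live target, only the heredoc changes: the JSON would come from `rpc_cmd bdev_raid_get_bdevs all` instead.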
00:13:39.158 [2024-11-27 21:46:02.053433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 [2024-11-27 21:46:02.069931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:39.158 BaseBdev1
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 [
00:13:39.158 {
00:13:39.158 "name": "BaseBdev1",
00:13:39.158 "aliases": [
00:13:39.158 "41e21682-f744-4186-b021-1436f3b3130d"
00:13:39.158 ],
00:13:39.158 "product_name": "Malloc disk",
00:13:39.158 "block_size": 512,
00:13:39.158 "num_blocks": 65536,
00:13:39.158 "uuid": "41e21682-f744-4186-b021-1436f3b3130d",
00:13:39.158 "assigned_rate_limits": {
00:13:39.158 "rw_ios_per_sec": 0,
00:13:39.158 "rw_mbytes_per_sec": 0,
00:13:39.158 "r_mbytes_per_sec": 0,
00:13:39.158 "w_mbytes_per_sec": 0
00:13:39.158 },
00:13:39.158 "claimed": true,
00:13:39.158 "claim_type": "exclusive_write",
00:13:39.158 "zoned": false,
00:13:39.158 "supported_io_types": {
00:13:39.158 "read": true,
00:13:39.158 "write": true,
00:13:39.158 "unmap": true,
00:13:39.158 "flush": true,
00:13:39.158 "reset": true,
00:13:39.158 "nvme_admin": false,
00:13:39.158 "nvme_io": false,
00:13:39.158 "nvme_io_md": false,
00:13:39.158 "write_zeroes": true,
00:13:39.158 "zcopy": true,
00:13:39.158 "get_zone_info": false,
00:13:39.158 "zone_management": false,
00:13:39.158 "zone_append": false,
00:13:39.158 "compare": false,
00:13:39.158 "compare_and_write": false,
00:13:39.158 "abort": true,
00:13:39.158 "seek_hole": false,
00:13:39.158 "seek_data": false,
00:13:39.158 "copy": true,
00:13:39.158 "nvme_iov_md": false
00:13:39.158 },
00:13:39.158 "memory_domains": [
00:13:39.158 {
00:13:39.158 "dma_device_id": "system",
00:13:39.158 "dma_device_type": 1
00:13:39.158 },
00:13:39.158 {
00:13:39.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:39.158 "dma_device_type": 2
00:13:39.158 }
00:13:39.158 ],
00:13:39.158 "driver_specific": {}
00:13:39.158 }
00:13:39.158 ]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
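The `waitforbdev BaseBdev1` sequence traced above boils down to: run `bdev_wait_for_examine`, then query `bdev_get_bdevs -b <name> -t 2000` until the bdev is visible (the `-t 2000` timeout makes the RPC itself wait). A simplified, self-contained sketch of that poll-until-present pattern, with `rpc_cmd` stubbed so it runs without an SPDK target; the retry loop and stub are ours, not autotest_common.sh verbatim:

```shell
attempts=0
rpc_cmd() {   # stub: pretend the bdev becomes visible on the 3rd poll
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

# Poll until rpc_cmd reports the bdev, up to 10 tries.
waitforbdev() {
    local bdev_name=$1 i
    for ((i = 0; i < 10; i++)); do
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" -t 2000; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev1 && echo "BaseBdev1 visible after $attempts polls"
```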
00:13:39.158 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:39.159 "name": "Existed_Raid",
00:13:39.159 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.159 "strip_size_kb": 64,
00:13:39.159 "state": "configuring",
00:13:39.159 "raid_level": "raid5f",
00:13:39.159 "superblock": false,
00:13:39.159 "num_base_bdevs": 4,
00:13:39.159 "num_base_bdevs_discovered": 1,
00:13:39.159 "num_base_bdevs_operational": 4,
00:13:39.159 "base_bdevs_list": [
00:13:39.159 {
00:13:39.159 "name": "BaseBdev1",
00:13:39.159 "uuid": "41e21682-f744-4186-b021-1436f3b3130d",
00:13:39.159 "is_configured": true,
00:13:39.159 "data_offset": 0,
00:13:39.159 "data_size": 65536
00:13:39.159 },
00:13:39.159 {
00:13:39.159 "name": "BaseBdev2",
00:13:39.159 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.159 "is_configured": false,
00:13:39.159 "data_offset": 0,
00:13:39.159 "data_size": 0
00:13:39.159 },
00:13:39.159 {
00:13:39.159 "name": "BaseBdev3",
00:13:39.159 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.159 "is_configured": false,
00:13:39.159 "data_offset": 0,
00:13:39.159 "data_size": 0
00:13:39.159 },
00:13:39.159 {
00:13:39.159 "name": "BaseBdev4",
00:13:39.159 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.159 "is_configured": false,
00:13:39.159 "data_offset": 0,
00:13:39.159 "data_size": 0
00:13:39.159 }
00:13:39.159 ]
00:13:39.159 }'
00:13:39.159 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:39.159 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.728 [2024-11-27 21:46:02.549236] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:13:39.728 [2024-11-27 21:46:02.549278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.728 [2024-11-27 21:46:02.557257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:13:39.728 [2024-11-27 21:46:02.559105] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:13:39.728 [2024-11-27 21:46:02.559148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:13:39.728 [2024-11-27 21:46:02.559157] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:13:39.728 [2024-11-27 21:46:02.559165] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:13:39.728 [2024-11-27 21:46:02.559171] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:13:39.728 [2024-11-27 21:46:02.559179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:13:39.728 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:39.729 "name": "Existed_Raid",
00:13:39.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.729 "strip_size_kb": 64,
00:13:39.729 "state": "configuring",
00:13:39.729 "raid_level": "raid5f",
00:13:39.729 "superblock": false,
00:13:39.729 "num_base_bdevs": 4,
00:13:39.729 "num_base_bdevs_discovered": 1,
00:13:39.729 "num_base_bdevs_operational": 4,
00:13:39.729 "base_bdevs_list": [
00:13:39.729 {
00:13:39.729 "name": "BaseBdev1",
00:13:39.729 "uuid": "41e21682-f744-4186-b021-1436f3b3130d",
00:13:39.729 "is_configured": true,
00:13:39.729 "data_offset": 0,
00:13:39.729 "data_size": 65536
00:13:39.729 },
00:13:39.729 {
00:13:39.729 "name": "BaseBdev2",
00:13:39.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.729 "is_configured": false,
00:13:39.729 "data_offset": 0,
00:13:39.729 "data_size": 0
00:13:39.729 },
00:13:39.729 {
00:13:39.729 "name": "BaseBdev3",
00:13:39.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.729 "is_configured": false,
00:13:39.729 "data_offset": 0,
00:13:39.729 "data_size": 0
00:13:39.729 },
00:13:39.729 {
00:13:39.729 "name": "BaseBdev4",
00:13:39.729 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.729 "is_configured": false,
00:13:39.729 "data_offset": 0,
00:13:39.729 "data_size": 0
00:13:39.729 }
00:13:39.729 ]
00:13:39.729 }'
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:39.729 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.989 [2024-11-27 21:46:02.991240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:39.989 BaseBdev2
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.989 21:46:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.989 [
00:13:39.989 {
00:13:39.989 "name": "BaseBdev2",
00:13:39.989 "aliases": [
00:13:39.989 "476ea6f8-a31d-4571-9822-e5e2b6384dde"
00:13:39.989 ],
00:13:39.989 "product_name": "Malloc disk",
00:13:39.989 "block_size": 512,
00:13:39.989 "num_blocks": 65536,
00:13:39.989 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde",
00:13:39.989 "assigned_rate_limits": {
00:13:39.989 "rw_ios_per_sec": 0,
00:13:39.989 "rw_mbytes_per_sec": 0,
00:13:39.989 "r_mbytes_per_sec": 0,
00:13:39.989 "w_mbytes_per_sec": 0
00:13:39.989 },
00:13:39.989 "claimed": true,
00:13:39.989 "claim_type": "exclusive_write",
00:13:39.989 "zoned": false,
00:13:39.989 "supported_io_types": {
00:13:39.989 "read": true,
00:13:39.989 "write": true,
00:13:39.989 "unmap": true,
00:13:39.989 "flush": true,
00:13:39.989 "reset": true,
00:13:39.989 "nvme_admin": false,
00:13:39.989 "nvme_io": false,
00:13:39.989 "nvme_io_md": false,
00:13:39.989 "write_zeroes": true,
00:13:39.989 "zcopy": true,
00:13:39.989 "get_zone_info": false,
00:13:39.989 "zone_management": false,
00:13:39.989 "zone_append": false,
00:13:39.989 "compare": false,
00:13:39.989 "compare_and_write": false,
00:13:39.989 "abort": true,
00:13:39.989 "seek_hole": false,
00:13:39.989 "seek_data": false,
00:13:39.989 "copy": true,
00:13:39.989 "nvme_iov_md": false
00:13:39.989 },
00:13:39.989 "memory_domains": [
00:13:39.989 {
00:13:39.989 "dma_device_id": "system",
00:13:39.989 "dma_device_type": 1
00:13:39.989 },
00:13:39.989 {
00:13:39.989 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:39.989 "dma_device_type": 2
00:13:39.989 }
00:13:39.989 ],
00:13:39.989 "driver_specific": {}
00:13:39.989 }
00:13:39.989 ]
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:39.989 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:39.990 "name": "Existed_Raid",
00:13:39.990 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.990 "strip_size_kb": 64,
00:13:39.990 "state": "configuring",
00:13:39.990 "raid_level": "raid5f",
00:13:39.990 "superblock": false,
00:13:39.990 "num_base_bdevs": 4,
00:13:39.990 "num_base_bdevs_discovered": 2,
00:13:39.990 "num_base_bdevs_operational": 4,
00:13:39.990 "base_bdevs_list": [
00:13:39.990 {
00:13:39.990 "name": "BaseBdev1",
00:13:39.990 "uuid": "41e21682-f744-4186-b021-1436f3b3130d",
00:13:39.990 "is_configured": true,
00:13:39.990 "data_offset": 0,
00:13:39.990 "data_size": 65536
00:13:39.990 },
00:13:39.990 {
00:13:39.990 "name": "BaseBdev2",
00:13:39.990 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde",
00:13:39.990 "is_configured": true,
00:13:39.990 "data_offset": 0,
00:13:39.990 "data_size": 65536
00:13:39.990 },
00:13:39.990 {
00:13:39.990 "name": "BaseBdev3",
00:13:39.990 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.990 "is_configured": false,
00:13:39.990 "data_offset": 0,
00:13:39.990 "data_size": 0
00:13:39.990 },
00:13:39.990 {
00:13:39.990 "name": "BaseBdev4",
00:13:39.990 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:39.990 "is_configured": false,
00:13:39.990 "data_offset": 0,
00:13:39.990 "data_size": 0
00:13:39.990 }
00:13:39.990 ]
00:13:39.990 }'
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:39.990 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:40.560 [2024-11-27 21:46:03.447986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:40.560 BaseBdev3
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:40.560 [
00:13:40.560 {
00:13:40.560 "name": "BaseBdev3",
00:13:40.560 "aliases": [
00:13:40.560 "7dd1a530-c51a-40df-9782-558ddc02735b"
00:13:40.560 ],
00:13:40.560 "product_name": "Malloc disk",
00:13:40.560 "block_size": 512,
00:13:40.560 "num_blocks": 65536,
00:13:40.560 "uuid": "7dd1a530-c51a-40df-9782-558ddc02735b",
00:13:40.560 "assigned_rate_limits": {
00:13:40.560 "rw_ios_per_sec": 0,
00:13:40.560 "rw_mbytes_per_sec": 0,
00:13:40.560 "r_mbytes_per_sec": 0,
00:13:40.560 "w_mbytes_per_sec": 0
00:13:40.560 },
00:13:40.560 "claimed": true,
00:13:40.560 "claim_type": "exclusive_write",
00:13:40.560 "zoned": false,
00:13:40.560 "supported_io_types": {
00:13:40.560 "read": true,
00:13:40.560 "write": true,
00:13:40.560 "unmap": true,
00:13:40.560 "flush": true,
00:13:40.560 "reset": true,
00:13:40.560 "nvme_admin": false,
00:13:40.560 "nvme_io": false, 00:13:40.560 "nvme_io_md": false, 00:13:40.560 "write_zeroes": true, 00:13:40.560 "zcopy": true, 00:13:40.560 "get_zone_info": false, 00:13:40.560 "zone_management": false, 00:13:40.560 "zone_append": false, 00:13:40.560 "compare": false, 00:13:40.560 "compare_and_write": false, 00:13:40.560 "abort": true, 00:13:40.560 "seek_hole": false, 00:13:40.560 "seek_data": false, 00:13:40.560 "copy": true, 00:13:40.560 "nvme_iov_md": false 00:13:40.560 }, 00:13:40.560 "memory_domains": [ 00:13:40.560 { 00:13:40.560 "dma_device_id": "system", 00:13:40.560 "dma_device_type": 1 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.560 "dma_device_type": 2 00:13:40.560 } 00:13:40.560 ], 00:13:40.560 "driver_specific": {} 00:13:40.560 } 00:13:40.560 ] 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.560 "name": "Existed_Raid", 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.560 "strip_size_kb": 64, 00:13:40.560 "state": "configuring", 00:13:40.560 "raid_level": "raid5f", 00:13:40.560 "superblock": false, 00:13:40.560 "num_base_bdevs": 4, 00:13:40.560 "num_base_bdevs_discovered": 3, 00:13:40.560 "num_base_bdevs_operational": 4, 00:13:40.560 "base_bdevs_list": [ 00:13:40.560 { 00:13:40.560 "name": "BaseBdev1", 00:13:40.560 "uuid": "41e21682-f744-4186-b021-1436f3b3130d", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 0, 00:13:40.560 "data_size": 65536 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "name": "BaseBdev2", 00:13:40.560 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 0, 00:13:40.560 "data_size": 65536 00:13:40.560 }, 00:13:40.560 { 
00:13:40.560 "name": "BaseBdev3", 00:13:40.560 "uuid": "7dd1a530-c51a-40df-9782-558ddc02735b", 00:13:40.560 "is_configured": true, 00:13:40.560 "data_offset": 0, 00:13:40.560 "data_size": 65536 00:13:40.560 }, 00:13:40.560 { 00:13:40.560 "name": "BaseBdev4", 00:13:40.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.560 "is_configured": false, 00:13:40.560 "data_offset": 0, 00:13:40.560 "data_size": 0 00:13:40.560 } 00:13:40.560 ] 00:13:40.560 }' 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.560 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 [2024-11-27 21:46:03.838006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:40.821 [2024-11-27 21:46:03.838146] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:40.821 [2024-11-27 21:46:03.838182] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:13:40.821 [2024-11-27 21:46:03.838512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:40.821 [2024-11-27 21:46:03.839062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:40.821 [2024-11-27 21:46:03.839113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:40.821 [2024-11-27 21:46:03.839387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.821 BaseBdev4 00:13:40.821 21:46:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 [ 00:13:40.821 { 00:13:40.821 "name": "BaseBdev4", 00:13:40.821 "aliases": [ 00:13:40.821 "55788bd2-12cb-437b-a435-3f70216043b9" 00:13:40.821 ], 00:13:40.821 "product_name": "Malloc disk", 00:13:40.821 "block_size": 512, 00:13:40.821 "num_blocks": 65536, 00:13:40.821 "uuid": "55788bd2-12cb-437b-a435-3f70216043b9", 00:13:40.821 "assigned_rate_limits": { 00:13:40.821 "rw_ios_per_sec": 0, 00:13:40.821 
"rw_mbytes_per_sec": 0, 00:13:40.821 "r_mbytes_per_sec": 0, 00:13:40.821 "w_mbytes_per_sec": 0 00:13:40.821 }, 00:13:40.821 "claimed": true, 00:13:40.821 "claim_type": "exclusive_write", 00:13:40.821 "zoned": false, 00:13:40.821 "supported_io_types": { 00:13:40.821 "read": true, 00:13:40.821 "write": true, 00:13:40.821 "unmap": true, 00:13:40.821 "flush": true, 00:13:40.821 "reset": true, 00:13:40.821 "nvme_admin": false, 00:13:40.821 "nvme_io": false, 00:13:40.821 "nvme_io_md": false, 00:13:40.821 "write_zeroes": true, 00:13:40.821 "zcopy": true, 00:13:40.821 "get_zone_info": false, 00:13:40.821 "zone_management": false, 00:13:40.821 "zone_append": false, 00:13:40.821 "compare": false, 00:13:40.821 "compare_and_write": false, 00:13:40.821 "abort": true, 00:13:40.821 "seek_hole": false, 00:13:40.821 "seek_data": false, 00:13:40.821 "copy": true, 00:13:40.821 "nvme_iov_md": false 00:13:40.821 }, 00:13:40.821 "memory_domains": [ 00:13:40.821 { 00:13:40.821 "dma_device_id": "system", 00:13:40.821 "dma_device_type": 1 00:13:40.821 }, 00:13:40.821 { 00:13:40.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:40.821 "dma_device_type": 2 00:13:40.821 } 00:13:40.821 ], 00:13:40.821 "driver_specific": {} 00:13:40.821 } 00:13:40.821 ] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.821 21:46:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.821 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.821 "name": "Existed_Raid", 00:13:40.822 "uuid": "fd7ea42b-f76e-4266-bd0a-311538faa996", 00:13:40.822 "strip_size_kb": 64, 00:13:40.822 "state": "online", 00:13:40.822 "raid_level": "raid5f", 00:13:40.822 "superblock": false, 00:13:40.822 "num_base_bdevs": 4, 00:13:40.822 "num_base_bdevs_discovered": 4, 00:13:40.822 "num_base_bdevs_operational": 4, 00:13:40.822 "base_bdevs_list": [ 00:13:40.822 { 00:13:40.822 "name": 
"BaseBdev1", 00:13:40.822 "uuid": "41e21682-f744-4186-b021-1436f3b3130d", 00:13:40.822 "is_configured": true, 00:13:40.822 "data_offset": 0, 00:13:40.822 "data_size": 65536 00:13:40.822 }, 00:13:40.822 { 00:13:40.822 "name": "BaseBdev2", 00:13:40.822 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde", 00:13:40.822 "is_configured": true, 00:13:40.822 "data_offset": 0, 00:13:40.822 "data_size": 65536 00:13:40.822 }, 00:13:40.822 { 00:13:40.822 "name": "BaseBdev3", 00:13:40.822 "uuid": "7dd1a530-c51a-40df-9782-558ddc02735b", 00:13:40.822 "is_configured": true, 00:13:40.822 "data_offset": 0, 00:13:40.822 "data_size": 65536 00:13:40.822 }, 00:13:40.822 { 00:13:40.822 "name": "BaseBdev4", 00:13:40.822 "uuid": "55788bd2-12cb-437b-a435-3f70216043b9", 00:13:40.822 "is_configured": true, 00:13:40.822 "data_offset": 0, 00:13:40.822 "data_size": 65536 00:13:40.822 } 00:13:40.822 ] 00:13:40.822 }' 00:13:40.822 21:46:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.822 21:46:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.391 [2024-11-27 21:46:04.313419] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.391 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:41.391 "name": "Existed_Raid", 00:13:41.391 "aliases": [ 00:13:41.391 "fd7ea42b-f76e-4266-bd0a-311538faa996" 00:13:41.391 ], 00:13:41.391 "product_name": "Raid Volume", 00:13:41.391 "block_size": 512, 00:13:41.391 "num_blocks": 196608, 00:13:41.391 "uuid": "fd7ea42b-f76e-4266-bd0a-311538faa996", 00:13:41.391 "assigned_rate_limits": { 00:13:41.391 "rw_ios_per_sec": 0, 00:13:41.391 "rw_mbytes_per_sec": 0, 00:13:41.391 "r_mbytes_per_sec": 0, 00:13:41.391 "w_mbytes_per_sec": 0 00:13:41.391 }, 00:13:41.391 "claimed": false, 00:13:41.391 "zoned": false, 00:13:41.391 "supported_io_types": { 00:13:41.391 "read": true, 00:13:41.391 "write": true, 00:13:41.391 "unmap": false, 00:13:41.391 "flush": false, 00:13:41.391 "reset": true, 00:13:41.391 "nvme_admin": false, 00:13:41.391 "nvme_io": false, 00:13:41.391 "nvme_io_md": false, 00:13:41.391 "write_zeroes": true, 00:13:41.391 "zcopy": false, 00:13:41.391 "get_zone_info": false, 00:13:41.391 "zone_management": false, 00:13:41.391 "zone_append": false, 00:13:41.392 "compare": false, 00:13:41.392 "compare_and_write": false, 00:13:41.392 "abort": false, 00:13:41.392 "seek_hole": false, 00:13:41.392 "seek_data": false, 00:13:41.392 "copy": false, 00:13:41.392 "nvme_iov_md": false 00:13:41.392 }, 00:13:41.392 "driver_specific": { 00:13:41.392 "raid": { 00:13:41.392 "uuid": "fd7ea42b-f76e-4266-bd0a-311538faa996", 00:13:41.392 "strip_size_kb": 64, 
00:13:41.392 "state": "online", 00:13:41.392 "raid_level": "raid5f", 00:13:41.392 "superblock": false, 00:13:41.392 "num_base_bdevs": 4, 00:13:41.392 "num_base_bdevs_discovered": 4, 00:13:41.392 "num_base_bdevs_operational": 4, 00:13:41.392 "base_bdevs_list": [ 00:13:41.392 { 00:13:41.392 "name": "BaseBdev1", 00:13:41.392 "uuid": "41e21682-f744-4186-b021-1436f3b3130d", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 0, 00:13:41.392 "data_size": 65536 00:13:41.392 }, 00:13:41.392 { 00:13:41.392 "name": "BaseBdev2", 00:13:41.392 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 0, 00:13:41.392 "data_size": 65536 00:13:41.392 }, 00:13:41.392 { 00:13:41.392 "name": "BaseBdev3", 00:13:41.392 "uuid": "7dd1a530-c51a-40df-9782-558ddc02735b", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 0, 00:13:41.392 "data_size": 65536 00:13:41.392 }, 00:13:41.392 { 00:13:41.392 "name": "BaseBdev4", 00:13:41.392 "uuid": "55788bd2-12cb-437b-a435-3f70216043b9", 00:13:41.392 "is_configured": true, 00:13:41.392 "data_offset": 0, 00:13:41.392 "data_size": 65536 00:13:41.392 } 00:13:41.392 ] 00:13:41.392 } 00:13:41.392 } 00:13:41.392 }' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:41.392 BaseBdev2 00:13:41.392 BaseBdev3 00:13:41.392 BaseBdev4' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.392 21:46:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.392 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.652 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:13:41.653 [2024-11-27 21:46:04.640721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.653 21:46:04 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.653 "name": "Existed_Raid", 00:13:41.653 "uuid": "fd7ea42b-f76e-4266-bd0a-311538faa996", 00:13:41.653 "strip_size_kb": 64, 00:13:41.653 "state": "online", 00:13:41.653 "raid_level": "raid5f", 00:13:41.653 "superblock": false, 00:13:41.653 "num_base_bdevs": 4, 00:13:41.653 "num_base_bdevs_discovered": 3, 00:13:41.653 "num_base_bdevs_operational": 3, 00:13:41.653 "base_bdevs_list": [ 00:13:41.653 { 00:13:41.653 "name": null, 00:13:41.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.653 "is_configured": false, 00:13:41.653 "data_offset": 0, 00:13:41.653 "data_size": 65536 00:13:41.653 }, 00:13:41.653 { 00:13:41.653 "name": "BaseBdev2", 00:13:41.653 "uuid": "476ea6f8-a31d-4571-9822-e5e2b6384dde", 00:13:41.653 "is_configured": true, 00:13:41.653 "data_offset": 0, 00:13:41.653 "data_size": 65536 00:13:41.653 }, 00:13:41.653 { 00:13:41.653 "name": "BaseBdev3", 00:13:41.653 "uuid": "7dd1a530-c51a-40df-9782-558ddc02735b", 00:13:41.653 "is_configured": true, 00:13:41.653 "data_offset": 0, 00:13:41.653 "data_size": 65536 00:13:41.653 }, 00:13:41.653 { 00:13:41.653 "name": "BaseBdev4", 00:13:41.653 "uuid": "55788bd2-12cb-437b-a435-3f70216043b9", 00:13:41.653 "is_configured": true, 00:13:41.653 "data_offset": 0, 00:13:41.653 "data_size": 65536 00:13:41.653 } 00:13:41.653 ] 00:13:41.653 }' 00:13:41.653 
21:46:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.653 21:46:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.223 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 [2024-11-27 21:46:05.103006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:42.224 [2024-11-27 21:46:05.103097] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:42.224 [2024-11-27 21:46:05.114056] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 [2024-11-27 21:46:05.173977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 [2024-11-27 21:46:05.244562] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:42.224 [2024-11-27 21:46:05.244604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 21:46:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 BaseBdev2 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.224 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.485 [ 00:13:42.485 { 00:13:42.485 "name": "BaseBdev2", 00:13:42.485 "aliases": [ 00:13:42.485 "d71ac939-3c3b-49ae-be1e-8e3d728ae83f" 00:13:42.485 ], 00:13:42.485 "product_name": "Malloc disk", 00:13:42.485 "block_size": 512, 00:13:42.485 "num_blocks": 65536, 00:13:42.485 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f", 00:13:42.485 "assigned_rate_limits": { 00:13:42.485 "rw_ios_per_sec": 0, 00:13:42.485 "rw_mbytes_per_sec": 0, 00:13:42.485 "r_mbytes_per_sec": 0, 00:13:42.485 "w_mbytes_per_sec": 0 00:13:42.485 }, 00:13:42.485 "claimed": false, 00:13:42.485 "zoned": false, 00:13:42.485 "supported_io_types": { 00:13:42.485 "read": true, 00:13:42.485 "write": true, 00:13:42.485 "unmap": true, 00:13:42.485 "flush": true, 00:13:42.485 "reset": true, 00:13:42.485 "nvme_admin": false, 00:13:42.485 "nvme_io": false, 00:13:42.485 "nvme_io_md": false, 00:13:42.485 "write_zeroes": true, 00:13:42.485 "zcopy": true, 00:13:42.485 "get_zone_info": false, 00:13:42.485 "zone_management": false, 00:13:42.485 "zone_append": false, 00:13:42.485 "compare": false, 00:13:42.485 "compare_and_write": false, 00:13:42.485 "abort": true, 00:13:42.485 "seek_hole": false, 00:13:42.485 "seek_data": false, 00:13:42.485 "copy": true, 00:13:42.485 "nvme_iov_md": false 00:13:42.485 }, 00:13:42.485 "memory_domains": [ 00:13:42.485 { 00:13:42.485 "dma_device_id": "system", 00:13:42.485 "dma_device_type": 1 00:13:42.485 }, 
00:13:42.485 { 00:13:42.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.485 "dma_device_type": 2 00:13:42.485 } 00:13:42.485 ], 00:13:42.485 "driver_specific": {} 00:13:42.485 } 00:13:42.485 ] 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.485 BaseBdev3 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:42.485 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 [ 00:13:42.486 { 00:13:42.486 "name": "BaseBdev3", 00:13:42.486 "aliases": [ 00:13:42.486 "7535bfae-b53c-45e4-8e67-5fd4c72b5af8" 00:13:42.486 ], 00:13:42.486 "product_name": "Malloc disk", 00:13:42.486 "block_size": 512, 00:13:42.486 "num_blocks": 65536, 00:13:42.486 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8", 00:13:42.486 "assigned_rate_limits": { 00:13:42.486 "rw_ios_per_sec": 0, 00:13:42.486 "rw_mbytes_per_sec": 0, 00:13:42.486 "r_mbytes_per_sec": 0, 00:13:42.486 "w_mbytes_per_sec": 0 00:13:42.486 }, 00:13:42.486 "claimed": false, 00:13:42.486 "zoned": false, 00:13:42.486 "supported_io_types": { 00:13:42.486 "read": true, 00:13:42.486 "write": true, 00:13:42.486 "unmap": true, 00:13:42.486 "flush": true, 00:13:42.486 "reset": true, 00:13:42.486 "nvme_admin": false, 00:13:42.486 "nvme_io": false, 00:13:42.486 "nvme_io_md": false, 00:13:42.486 "write_zeroes": true, 00:13:42.486 "zcopy": true, 00:13:42.486 "get_zone_info": false, 00:13:42.486 "zone_management": false, 00:13:42.486 "zone_append": false, 00:13:42.486 "compare": false, 00:13:42.486 "compare_and_write": false, 00:13:42.486 "abort": true, 00:13:42.486 "seek_hole": false, 00:13:42.486 "seek_data": false, 00:13:42.486 "copy": true, 00:13:42.486 "nvme_iov_md": false 00:13:42.486 }, 00:13:42.486 "memory_domains": [ 00:13:42.486 { 00:13:42.486 "dma_device_id": "system", 00:13:42.486 
"dma_device_type": 1 00:13:42.486 }, 00:13:42.486 { 00:13:42.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.486 "dma_device_type": 2 00:13:42.486 } 00:13:42.486 ], 00:13:42.486 "driver_specific": {} 00:13:42.486 } 00:13:42.486 ] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 BaseBdev4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.486 21:46:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 [ 00:13:42.486 { 00:13:42.486 "name": "BaseBdev4", 00:13:42.486 "aliases": [ 00:13:42.486 "ba07e27e-afa3-4510-83fc-f48f06dfa5a8" 00:13:42.486 ], 00:13:42.486 "product_name": "Malloc disk", 00:13:42.486 "block_size": 512, 00:13:42.486 "num_blocks": 65536, 00:13:42.486 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8", 00:13:42.486 "assigned_rate_limits": { 00:13:42.486 "rw_ios_per_sec": 0, 00:13:42.486 "rw_mbytes_per_sec": 0, 00:13:42.486 "r_mbytes_per_sec": 0, 00:13:42.486 "w_mbytes_per_sec": 0 00:13:42.486 }, 00:13:42.486 "claimed": false, 00:13:42.486 "zoned": false, 00:13:42.486 "supported_io_types": { 00:13:42.486 "read": true, 00:13:42.486 "write": true, 00:13:42.486 "unmap": true, 00:13:42.486 "flush": true, 00:13:42.486 "reset": true, 00:13:42.486 "nvme_admin": false, 00:13:42.486 "nvme_io": false, 00:13:42.486 "nvme_io_md": false, 00:13:42.486 "write_zeroes": true, 00:13:42.486 "zcopy": true, 00:13:42.486 "get_zone_info": false, 00:13:42.486 "zone_management": false, 00:13:42.486 "zone_append": false, 00:13:42.486 "compare": false, 00:13:42.486 "compare_and_write": false, 00:13:42.486 "abort": true, 00:13:42.486 "seek_hole": false, 00:13:42.486 "seek_data": false, 00:13:42.486 "copy": true, 00:13:42.486 "nvme_iov_md": false 00:13:42.486 }, 00:13:42.486 "memory_domains": [ 00:13:42.486 { 00:13:42.486 
"dma_device_id": "system", 00:13:42.486 "dma_device_type": 1 00:13:42.486 }, 00:13:42.486 { 00:13:42.486 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.486 "dma_device_type": 2 00:13:42.486 } 00:13:42.486 ], 00:13:42.486 "driver_specific": {} 00:13:42.486 } 00:13:42.486 ] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 [2024-11-27 21:46:05.475348] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:42.486 [2024-11-27 21:46:05.475390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:42.486 [2024-11-27 21:46:05.475430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:42.486 [2024-11-27 21:46:05.477258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:42.486 [2024-11-27 21:46:05.477344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.486 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.486 "name": "Existed_Raid", 00:13:42.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.486 "strip_size_kb": 64, 00:13:42.486 "state": "configuring", 00:13:42.486 "raid_level": "raid5f", 00:13:42.486 "superblock": false, 00:13:42.486 
"num_base_bdevs": 4, 00:13:42.486 "num_base_bdevs_discovered": 3, 00:13:42.486 "num_base_bdevs_operational": 4, 00:13:42.486 "base_bdevs_list": [ 00:13:42.486 { 00:13:42.486 "name": "BaseBdev1", 00:13:42.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.486 "is_configured": false, 00:13:42.486 "data_offset": 0, 00:13:42.486 "data_size": 0 00:13:42.486 }, 00:13:42.486 { 00:13:42.486 "name": "BaseBdev2", 00:13:42.486 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f", 00:13:42.486 "is_configured": true, 00:13:42.486 "data_offset": 0, 00:13:42.486 "data_size": 65536 00:13:42.486 }, 00:13:42.487 { 00:13:42.487 "name": "BaseBdev3", 00:13:42.487 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8", 00:13:42.487 "is_configured": true, 00:13:42.487 "data_offset": 0, 00:13:42.487 "data_size": 65536 00:13:42.487 }, 00:13:42.487 { 00:13:42.487 "name": "BaseBdev4", 00:13:42.487 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8", 00:13:42.487 "is_configured": true, 00:13:42.487 "data_offset": 0, 00:13:42.487 "data_size": 65536 00:13:42.487 } 00:13:42.487 ] 00:13:42.487 }' 00:13:42.487 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.487 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.056 [2024-11-27 21:46:05.894606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.056 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.056 "name": "Existed_Raid", 00:13:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.056 "strip_size_kb": 64, 00:13:43.056 "state": "configuring", 00:13:43.056 "raid_level": "raid5f", 00:13:43.056 "superblock": false, 00:13:43.056 "num_base_bdevs": 4, 
00:13:43.056 "num_base_bdevs_discovered": 2, 00:13:43.056 "num_base_bdevs_operational": 4, 00:13:43.056 "base_bdevs_list": [ 00:13:43.056 { 00:13:43.056 "name": "BaseBdev1", 00:13:43.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.056 "is_configured": false, 00:13:43.056 "data_offset": 0, 00:13:43.056 "data_size": 0 00:13:43.056 }, 00:13:43.056 { 00:13:43.056 "name": null, 00:13:43.056 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f", 00:13:43.056 "is_configured": false, 00:13:43.056 "data_offset": 0, 00:13:43.056 "data_size": 65536 00:13:43.056 }, 00:13:43.056 { 00:13:43.056 "name": "BaseBdev3", 00:13:43.056 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8", 00:13:43.056 "is_configured": true, 00:13:43.056 "data_offset": 0, 00:13:43.056 "data_size": 65536 00:13:43.056 }, 00:13:43.056 { 00:13:43.056 "name": "BaseBdev4", 00:13:43.057 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8", 00:13:43.057 "is_configured": true, 00:13:43.057 "data_offset": 0, 00:13:43.057 "data_size": 65536 00:13:43.057 } 00:13:43.057 ] 00:13:43.057 }' 00:13:43.057 21:46:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.057 21:46:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.316 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.316 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:43.316 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.316 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.316 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:43.317 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 [2024-11-27 21:46:06.372525] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:43.317 BaseBdev1 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 21:46:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 [ 00:13:43.317 { 00:13:43.317 "name": "BaseBdev1", 00:13:43.317 "aliases": [ 00:13:43.317 "8cf51068-c783-430e-bde1-8132b4575aed" 00:13:43.317 ], 00:13:43.317 "product_name": "Malloc disk", 00:13:43.317 "block_size": 512, 00:13:43.317 "num_blocks": 65536, 00:13:43.317 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed", 00:13:43.317 "assigned_rate_limits": { 00:13:43.317 "rw_ios_per_sec": 0, 00:13:43.317 "rw_mbytes_per_sec": 0, 00:13:43.317 "r_mbytes_per_sec": 0, 00:13:43.317 "w_mbytes_per_sec": 0 00:13:43.317 }, 00:13:43.317 "claimed": true, 00:13:43.317 "claim_type": "exclusive_write", 00:13:43.317 "zoned": false, 00:13:43.317 "supported_io_types": { 00:13:43.317 "read": true, 00:13:43.317 "write": true, 00:13:43.317 "unmap": true, 00:13:43.317 "flush": true, 00:13:43.317 "reset": true, 00:13:43.317 "nvme_admin": false, 00:13:43.317 "nvme_io": false, 00:13:43.317 "nvme_io_md": false, 00:13:43.317 "write_zeroes": true, 00:13:43.317 "zcopy": true, 00:13:43.317 "get_zone_info": false, 00:13:43.317 "zone_management": false, 00:13:43.317 "zone_append": false, 00:13:43.317 "compare": false, 00:13:43.317 "compare_and_write": false, 00:13:43.317 "abort": true, 00:13:43.317 "seek_hole": false, 00:13:43.317 "seek_data": false, 00:13:43.317 "copy": true, 00:13:43.317 "nvme_iov_md": false 00:13:43.317 }, 00:13:43.317 "memory_domains": [ 00:13:43.317 { 00:13:43.317 "dma_device_id": "system", 00:13:43.317 "dma_device_type": 1 00:13:43.317 }, 00:13:43.317 { 00:13:43.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.317 "dma_device_type": 2 00:13:43.317 } 00:13:43.317 ], 00:13:43.317 "driver_specific": {} 00:13:43.317 } 00:13:43.317 ] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:43.317 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.317 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.577 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.578 "name": "Existed_Raid", 00:13:43.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.578 "strip_size_kb": 64, 00:13:43.578 "state": 
"configuring", 00:13:43.578 "raid_level": "raid5f", 00:13:43.578 "superblock": false, 00:13:43.578 "num_base_bdevs": 4, 00:13:43.578 "num_base_bdevs_discovered": 3, 00:13:43.578 "num_base_bdevs_operational": 4, 00:13:43.578 "base_bdevs_list": [ 00:13:43.578 { 00:13:43.578 "name": "BaseBdev1", 00:13:43.578 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed", 00:13:43.578 "is_configured": true, 00:13:43.578 "data_offset": 0, 00:13:43.578 "data_size": 65536 00:13:43.578 }, 00:13:43.578 { 00:13:43.578 "name": null, 00:13:43.578 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f", 00:13:43.578 "is_configured": false, 00:13:43.578 "data_offset": 0, 00:13:43.578 "data_size": 65536 00:13:43.578 }, 00:13:43.578 { 00:13:43.578 "name": "BaseBdev3", 00:13:43.578 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8", 00:13:43.578 "is_configured": true, 00:13:43.578 "data_offset": 0, 00:13:43.578 "data_size": 65536 00:13:43.578 }, 00:13:43.578 { 00:13:43.578 "name": "BaseBdev4", 00:13:43.578 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8", 00:13:43.578 "is_configured": true, 00:13:43.578 "data_offset": 0, 00:13:43.578 "data_size": 65536 00:13:43.578 } 00:13:43.578 ] 00:13:43.578 }' 00:13:43.578 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.578 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.838 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.838 [2024-11-27 21:46:06.871811] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:43.838 21:46:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:43.838 "name": "Existed_Raid",
00:13:43.838 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:43.838 "strip_size_kb": 64,
00:13:43.838 "state": "configuring",
00:13:43.838 "raid_level": "raid5f",
00:13:43.838 "superblock": false,
00:13:43.838 "num_base_bdevs": 4,
00:13:43.838 "num_base_bdevs_discovered": 2,
00:13:43.838 "num_base_bdevs_operational": 4,
00:13:43.838 "base_bdevs_list": [
00:13:43.838 {
00:13:43.838 "name": "BaseBdev1",
00:13:43.838 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:43.838 "is_configured": true,
00:13:43.838 "data_offset": 0,
00:13:43.838 "data_size": 65536
00:13:43.838 },
00:13:43.838 {
00:13:43.838 "name": null,
00:13:43.838 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:43.838 "is_configured": false,
00:13:43.838 "data_offset": 0,
00:13:43.838 "data_size": 65536
00:13:43.838 },
00:13:43.838 {
00:13:43.838 "name": null,
00:13:43.838 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:43.838 "is_configured": false,
00:13:43.838 "data_offset": 0,
00:13:43.838 "data_size": 65536
00:13:43.838 },
00:13:43.838 {
00:13:43.838 "name": "BaseBdev4",
00:13:43.838 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:43.838 "is_configured": true,
00:13:43.838 "data_offset": 0,
00:13:43.838 "data_size": 65536
00:13:43.838 }
00:13:43.838 ]
00:13:43.838 }'
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:43.838 21:46:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 21:46:07.366980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:44.408 "name": "Existed_Raid",
00:13:44.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.408 "strip_size_kb": 64,
00:13:44.408 "state": "configuring",
00:13:44.408 "raid_level": "raid5f",
00:13:44.408 "superblock": false,
00:13:44.408 "num_base_bdevs": 4,
00:13:44.408 "num_base_bdevs_discovered": 3,
00:13:44.408 "num_base_bdevs_operational": 4,
00:13:44.408 "base_bdevs_list": [
00:13:44.408 {
00:13:44.408 "name": "BaseBdev1",
00:13:44.408 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:44.408 "is_configured": true,
00:13:44.408 "data_offset": 0,
00:13:44.408 "data_size": 65536
00:13:44.408 },
00:13:44.408 {
00:13:44.408 "name": null,
00:13:44.408 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:44.408 "is_configured": false,
00:13:44.408 "data_offset": 0,
00:13:44.408 "data_size": 65536
00:13:44.408 },
00:13:44.408 {
00:13:44.408 "name": "BaseBdev3",
00:13:44.408 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:44.408 "is_configured": true,
00:13:44.408 "data_offset": 0,
00:13:44.408 "data_size": 65536
00:13:44.408 },
00:13:44.408 {
00:13:44.408 "name": "BaseBdev4",
00:13:44.408 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:44.408 "is_configured": true,
00:13:44.408 "data_offset": 0,
00:13:44.408 "data_size": 65536
00:13:44.408 }
00:13:44.408 ]
00:13:44.408 }'
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:44.408 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 21:46:07.862154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:44.978 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:44.979 "name": "Existed_Raid",
00:13:44.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:44.979 "strip_size_kb": 64,
00:13:44.979 "state": "configuring",
00:13:44.979 "raid_level": "raid5f",
00:13:44.979 "superblock": false,
00:13:44.979 "num_base_bdevs": 4,
00:13:44.979 "num_base_bdevs_discovered": 2,
00:13:44.979 "num_base_bdevs_operational": 4,
00:13:44.979 "base_bdevs_list": [
00:13:44.979 {
00:13:44.979 "name": null,
00:13:44.979 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:44.979 "is_configured": false,
00:13:44.979 "data_offset": 0,
00:13:44.979 "data_size": 65536
00:13:44.979 },
00:13:44.979 {
00:13:44.979 "name": null,
00:13:44.979 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:44.979 "is_configured": false,
00:13:44.979 "data_offset": 0,
00:13:44.979 "data_size": 65536
00:13:44.979 },
00:13:44.979 {
00:13:44.979 "name": "BaseBdev3",
00:13:44.979 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:44.979 "is_configured": true,
00:13:44.979 "data_offset": 0,
00:13:44.979 "data_size": 65536
00:13:44.979 },
00:13:44.979 {
00:13:44.979 "name": "BaseBdev4",
00:13:44.979 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:44.979 "is_configured": true,
00:13:44.979 "data_offset": 0,
00:13:44.979 "data_size": 65536
00:13:44.979 }
00:13:44.979 ]
00:13:44.979 }'
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:44.979 21:46:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 21:46:08.403553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.548 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:45.549 "name": "Existed_Raid",
00:13:45.549 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:45.549 "strip_size_kb": 64,
00:13:45.549 "state": "configuring",
00:13:45.549 "raid_level": "raid5f",
00:13:45.549 "superblock": false,
00:13:45.549 "num_base_bdevs": 4,
00:13:45.549 "num_base_bdevs_discovered": 3,
00:13:45.549 "num_base_bdevs_operational": 4,
00:13:45.549 "base_bdevs_list": [
00:13:45.549 {
00:13:45.549 "name": null,
00:13:45.549 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:45.549 "is_configured": false,
00:13:45.549 "data_offset": 0,
00:13:45.549 "data_size": 65536
00:13:45.549 },
00:13:45.549 {
00:13:45.549 "name": "BaseBdev2",
00:13:45.549 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:45.549 "is_configured": true,
00:13:45.549 "data_offset": 0,
00:13:45.549 "data_size": 65536
00:13:45.549 },
00:13:45.549 {
00:13:45.549 "name": "BaseBdev3",
00:13:45.549 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:45.549 "is_configured": true,
00:13:45.549 "data_offset": 0,
00:13:45.549 "data_size": 65536
00:13:45.549 },
00:13:45.549 {
00:13:45.549 "name": "BaseBdev4",
00:13:45.549 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:45.549 "is_configured": true,
00:13:45.549 "data_offset": 0,
00:13:45.549 "data_size": 65536
00:13:45.549 }
00:13:45.549 ]
00:13:45.549 }'
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:45.549 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]]
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid'
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:45.808 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8cf51068-c783-430e-bde1-8132b4575aed
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 21:46:08.949444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed
NewBaseBdev
00:13:46.069 [2024-11-27 21:46:08.949576] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80
00:13:46.069 [2024-11-27 21:46:08.949590] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:13:46.069 [2024-11-27 21:46:08.949903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10
00:13:46.069 [2024-11-27 21:46:08.950356] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80
00:13:46.069 [2024-11-27 21:46:08.950371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80
00:13:46.069 [2024-11-27 21:46:08.950546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.069 [
00:13:46.069 {
00:13:46.069 "name": "NewBaseBdev",
00:13:46.069 "aliases": [
00:13:46.069 "8cf51068-c783-430e-bde1-8132b4575aed"
00:13:46.069 ],
00:13:46.069 "product_name": "Malloc disk",
00:13:46.069 "block_size": 512,
00:13:46.069 "num_blocks": 65536,
00:13:46.069 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:46.069 "assigned_rate_limits": {
00:13:46.069 "rw_ios_per_sec": 0,
00:13:46.069 "rw_mbytes_per_sec": 0,
00:13:46.069 "r_mbytes_per_sec": 0,
00:13:46.069 "w_mbytes_per_sec": 0
00:13:46.069 },
00:13:46.069 "claimed": true,
00:13:46.069 "claim_type": "exclusive_write",
00:13:46.069 "zoned": false,
00:13:46.069 "supported_io_types": {
00:13:46.069 "read": true,
00:13:46.069 "write": true,
00:13:46.069 "unmap": true,
00:13:46.069 "flush": true,
00:13:46.069 "reset": true,
00:13:46.069 "nvme_admin": false,
00:13:46.069 "nvme_io": false,
00:13:46.069 "nvme_io_md": false,
00:13:46.069 "write_zeroes": true,
00:13:46.069 "zcopy": true,
00:13:46.069 "get_zone_info": false,
00:13:46.069 "zone_management": false,
00:13:46.069 "zone_append": false,
00:13:46.069 "compare": false,
00:13:46.069 "compare_and_write": false,
00:13:46.069 "abort": true,
00:13:46.069 "seek_hole": false,
00:13:46.069 "seek_data": false,
00:13:46.069 "copy": true,
00:13:46.069 "nvme_iov_md": false
00:13:46.069 },
00:13:46.069 "memory_domains": [
00:13:46.069 {
00:13:46.069 "dma_device_id": "system",
00:13:46.069 "dma_device_type": 1
00:13:46.069 },
00:13:46.069 {
00:13:46.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:13:46.069 "dma_device_type": 2
00:13:46.069 }
00:13:46.069 ],
00:13:46.069 "driver_specific": {}
00:13:46.069 }
00:13:46.069 ]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.069 21:46:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.069 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.069 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:13:46.069 "name": "Existed_Raid",
00:13:46.069 "uuid": "12868a3b-64c4-48c4-83a9-47835a5f2473",
00:13:46.069 "strip_size_kb": 64,
00:13:46.069 "state": "online",
00:13:46.069 "raid_level": "raid5f",
00:13:46.069 "superblock": false,
00:13:46.069 "num_base_bdevs": 4,
00:13:46.069 "num_base_bdevs_discovered": 4,
00:13:46.069 "num_base_bdevs_operational": 4,
00:13:46.069 "base_bdevs_list": [
00:13:46.069 {
00:13:46.069 "name": "NewBaseBdev",
00:13:46.069 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:46.069 "is_configured": true,
00:13:46.069 "data_offset": 0,
00:13:46.069 "data_size": 65536
00:13:46.069 },
00:13:46.069 {
00:13:46.069 "name": "BaseBdev2",
00:13:46.069 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:46.069 "is_configured": true,
00:13:46.069 "data_offset": 0,
00:13:46.069 "data_size": 65536
00:13:46.069 },
00:13:46.069 {
00:13:46.069 "name": "BaseBdev3",
00:13:46.069 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:46.069 "is_configured": true,
00:13:46.069 "data_offset": 0,
00:13:46.069 "data_size": 65536
00:13:46.069 },
00:13:46.069 {
00:13:46.069 "name": "BaseBdev4",
00:13:46.069 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:46.069 "is_configured": true,
00:13:46.069 "data_offset": 0,
00:13:46.069 "data_size": 65536
00:13:46.069 }
00:13:46.069 ]
00:13:46.069 }'
00:13:46.069 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:13:46.069 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
[2024-11-27 21:46:09.404912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:13:46.328 "name": "Existed_Raid",
00:13:46.328 "aliases": [
00:13:46.328 "12868a3b-64c4-48c4-83a9-47835a5f2473"
00:13:46.328 ],
00:13:46.328 "product_name": "Raid Volume",
00:13:46.328 "block_size": 512,
00:13:46.328 "num_blocks": 196608,
00:13:46.328 "uuid": "12868a3b-64c4-48c4-83a9-47835a5f2473",
00:13:46.328 "assigned_rate_limits": {
00:13:46.328 "rw_ios_per_sec": 0,
00:13:46.328 "rw_mbytes_per_sec": 0,
00:13:46.328 "r_mbytes_per_sec": 0,
00:13:46.328 "w_mbytes_per_sec": 0
00:13:46.328 },
00:13:46.328 "claimed": false,
00:13:46.328 "zoned": false,
00:13:46.328 "supported_io_types": {
00:13:46.328 "read": true,
00:13:46.328 "write": true,
00:13:46.328 "unmap": false,
00:13:46.328 "flush": false,
00:13:46.328 "reset": true,
00:13:46.328 "nvme_admin": false,
00:13:46.328 "nvme_io": false,
00:13:46.328 "nvme_io_md": false,
00:13:46.328 "write_zeroes": true,
00:13:46.328 "zcopy": false,
00:13:46.328 "get_zone_info": false,
00:13:46.328 "zone_management": false,
00:13:46.328 "zone_append": false,
00:13:46.328 "compare": false,
00:13:46.328 "compare_and_write": false,
00:13:46.328 "abort": false,
00:13:46.328 "seek_hole": false,
00:13:46.328 "seek_data": false,
00:13:46.328 "copy": false,
00:13:46.328 "nvme_iov_md": false
00:13:46.328 },
00:13:46.328 "driver_specific": {
00:13:46.328 "raid": {
00:13:46.328 "uuid": "12868a3b-64c4-48c4-83a9-47835a5f2473",
00:13:46.328 "strip_size_kb": 64,
00:13:46.328 "state": "online",
00:13:46.328 "raid_level": "raid5f",
00:13:46.328 "superblock": false,
00:13:46.328 "num_base_bdevs": 4,
00:13:46.328 "num_base_bdevs_discovered": 4,
00:13:46.328 "num_base_bdevs_operational": 4,
00:13:46.328 "base_bdevs_list": [
00:13:46.328 {
00:13:46.328 "name": "NewBaseBdev",
00:13:46.328 "uuid": "8cf51068-c783-430e-bde1-8132b4575aed",
00:13:46.328 "is_configured": true,
00:13:46.328 "data_offset": 0,
00:13:46.328 "data_size": 65536
00:13:46.328 },
00:13:46.328 {
00:13:46.328 "name": "BaseBdev2",
00:13:46.328 "uuid": "d71ac939-3c3b-49ae-be1e-8e3d728ae83f",
00:13:46.328 "is_configured": true,
00:13:46.328 "data_offset": 0,
00:13:46.328 "data_size": 65536
00:13:46.328 },
00:13:46.328 {
00:13:46.328 "name": "BaseBdev3",
00:13:46.328 "uuid": "7535bfae-b53c-45e4-8e67-5fd4c72b5af8",
00:13:46.328 "is_configured": true,
00:13:46.328 "data_offset": 0,
00:13:46.328 "data_size": 65536
00:13:46.328 },
00:13:46.328 {
00:13:46.328 "name": "BaseBdev4",
00:13:46.328 "uuid": "ba07e27e-afa3-4510-83fc-f48f06dfa5a8",
00:13:46.328 "is_configured": true,
00:13:46.328 "data_offset": 0,
00:13:46.328 "data_size": 65536
00:13:46.328 }
00:13:46.328 ]
00:13:46.328 }
00:13:46.328 }
00:13:46.328 }'
00:13:46.328 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:13:46.587 BaseBdev2
00:13:46.587 BaseBdev3
00:13:46.587 BaseBdev4'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:46.587 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
[2024-11-27 21:46:09.688205] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-11-27 21:46:09.688233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
[2024-11-27 21:46:09.688306] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-27 21:46:09.688566] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-27 21:46:09.688577] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 92918
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 92918 ']'
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 92918
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:46.588 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92918
00:13:46.853 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:46.853 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:46.853 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92918'
killing process with pid 92918
00:13:46.853 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 92918
[2024-11-27 21:46:09.734439] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:13:46.854 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 92918
[2024-11-27 21:46:09.773184] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:13:47.116 21:46:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:13:47.116
00:13:47.116 real 0m9.262s
00:13:47.116 user 0m15.863s
00:13:47.116 sys 0m1.952s
00:13:47.116 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.116 21:46:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:13:47.116 ************************************
00:13:47.116 END TEST raid5f_state_function_test
00:13:47.116 ************************************
00:13:47.116 21:46:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true
00:13:47.116 21:46:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:47.116 21:46:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.116 21:46:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:13:47.116 ************************************
00:13:47.116 START TEST raid5f_state_function_test_sb
00:13:47.116 ************************************
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']'
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=93562
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 93562
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 93562' 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 93562 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 93562 ']' 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:47.116 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.116 [2024-11-27 21:46:10.153454] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:13:47.116 [2024-11-27 21:46:10.153654] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.388 [2024-11-27 21:46:10.302647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.388 [2024-11-27 21:46:10.327377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.388 [2024-11-27 21:46:10.369466] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.388 [2024-11-27 21:46:10.369550] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.996 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.997 [2024-11-27 21:46:10.980023] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:47.997 [2024-11-27 21:46:10.980143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:47.997 [2024-11-27 21:46:10.980158] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:47.997 [2024-11-27 21:46:10.980169] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:47.997 [2024-11-27 21:46:10.980175] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:13:47.997 [2024-11-27 21:46:10.980185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:47.997 [2024-11-27 21:46:10.980191] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:47.997 [2024-11-27 21:46:10.980200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.997 21:46:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:47.997 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.997 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.997 "name": "Existed_Raid", 00:13:47.997 "uuid": "91bda428-ab18-48aa-89ee-8e86942acc6a", 00:13:47.997 "strip_size_kb": 64, 00:13:47.997 "state": "configuring", 00:13:47.997 "raid_level": "raid5f", 00:13:47.997 "superblock": true, 00:13:47.997 "num_base_bdevs": 4, 00:13:47.997 "num_base_bdevs_discovered": 0, 00:13:47.997 "num_base_bdevs_operational": 4, 00:13:47.997 "base_bdevs_list": [ 00:13:47.997 { 00:13:47.997 "name": "BaseBdev1", 00:13:47.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.997 "is_configured": false, 00:13:47.997 "data_offset": 0, 00:13:47.997 "data_size": 0 00:13:47.997 }, 00:13:47.997 { 00:13:47.997 "name": "BaseBdev2", 00:13:47.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.997 "is_configured": false, 00:13:47.997 "data_offset": 0, 00:13:47.997 "data_size": 0 00:13:47.997 }, 00:13:47.997 { 00:13:47.997 "name": "BaseBdev3", 00:13:47.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.997 "is_configured": false, 00:13:47.997 "data_offset": 0, 00:13:47.997 "data_size": 0 00:13:47.997 }, 00:13:47.997 { 00:13:47.997 "name": "BaseBdev4", 00:13:47.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.997 "is_configured": false, 00:13:47.997 "data_offset": 0, 00:13:47.997 "data_size": 0 00:13:47.997 } 00:13:47.997 ] 00:13:47.997 }' 00:13:47.997 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.997 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 [2024-11-27 21:46:11.411179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.567 [2024-11-27 21:46:11.411259] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 [2024-11-27 21:46:11.423196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:48.567 [2024-11-27 21:46:11.423270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:48.567 [2024-11-27 21:46:11.423296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.567 [2024-11-27 21:46:11.423318] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.567 [2024-11-27 21:46:11.423336] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.567 [2024-11-27 21:46:11.423356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.567 [2024-11-27 21:46:11.423373] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:48.567 [2024-11-27 21:46:11.423393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.567 [2024-11-27 21:46:11.443845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.567 BaseBdev1 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:48.567 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.568 [ 00:13:48.568 { 00:13:48.568 "name": "BaseBdev1", 00:13:48.568 "aliases": [ 00:13:48.568 "95f6f628-b8b4-40de-b396-396b7940e440" 00:13:48.568 ], 00:13:48.568 "product_name": "Malloc disk", 00:13:48.568 "block_size": 512, 00:13:48.568 "num_blocks": 65536, 00:13:48.568 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:48.568 "assigned_rate_limits": { 00:13:48.568 "rw_ios_per_sec": 0, 00:13:48.568 "rw_mbytes_per_sec": 0, 00:13:48.568 "r_mbytes_per_sec": 0, 00:13:48.568 "w_mbytes_per_sec": 0 00:13:48.568 }, 00:13:48.568 "claimed": true, 00:13:48.568 "claim_type": "exclusive_write", 00:13:48.568 "zoned": false, 00:13:48.568 "supported_io_types": { 00:13:48.568 "read": true, 00:13:48.568 "write": true, 00:13:48.568 "unmap": true, 00:13:48.568 "flush": true, 00:13:48.568 "reset": true, 00:13:48.568 "nvme_admin": false, 00:13:48.568 "nvme_io": false, 00:13:48.568 "nvme_io_md": false, 00:13:48.568 "write_zeroes": true, 00:13:48.568 "zcopy": true, 00:13:48.568 "get_zone_info": false, 00:13:48.568 "zone_management": false, 00:13:48.568 "zone_append": false, 00:13:48.568 "compare": false, 00:13:48.568 "compare_and_write": false, 00:13:48.568 "abort": true, 00:13:48.568 "seek_hole": false, 00:13:48.568 "seek_data": false, 00:13:48.568 "copy": true, 00:13:48.568 "nvme_iov_md": false 00:13:48.568 }, 00:13:48.568 "memory_domains": [ 00:13:48.568 { 00:13:48.568 "dma_device_id": "system", 00:13:48.568 "dma_device_type": 1 00:13:48.568 }, 00:13:48.568 { 00:13:48.568 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:48.568 "dma_device_type": 2 00:13:48.568 } 00:13:48.568 ], 00:13:48.568 "driver_specific": {} 00:13:48.568 } 00:13:48.568 ] 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.568 "name": "Existed_Raid", 00:13:48.568 "uuid": "d5a357fb-8e54-4616-8aa4-bb929b6043f0", 00:13:48.568 "strip_size_kb": 64, 00:13:48.568 "state": "configuring", 00:13:48.568 "raid_level": "raid5f", 00:13:48.568 "superblock": true, 00:13:48.568 "num_base_bdevs": 4, 00:13:48.568 "num_base_bdevs_discovered": 1, 00:13:48.568 "num_base_bdevs_operational": 4, 00:13:48.568 "base_bdevs_list": [ 00:13:48.568 { 00:13:48.568 "name": "BaseBdev1", 00:13:48.568 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:48.568 "is_configured": true, 00:13:48.568 "data_offset": 2048, 00:13:48.568 "data_size": 63488 00:13:48.568 }, 00:13:48.568 { 00:13:48.568 "name": "BaseBdev2", 00:13:48.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.568 "is_configured": false, 00:13:48.568 "data_offset": 0, 00:13:48.568 "data_size": 0 00:13:48.568 }, 00:13:48.568 { 00:13:48.568 "name": "BaseBdev3", 00:13:48.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.568 "is_configured": false, 00:13:48.568 "data_offset": 0, 00:13:48.568 "data_size": 0 00:13:48.568 }, 00:13:48.568 { 00:13:48.568 "name": "BaseBdev4", 00:13:48.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.568 "is_configured": false, 00:13:48.568 "data_offset": 0, 00:13:48.568 "data_size": 0 00:13:48.568 } 00:13:48.568 ] 00:13:48.568 }' 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.568 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:48.828 21:46:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.828 [2024-11-27 21:46:11.907050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:48.828 [2024-11-27 21:46:11.907092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:48.828 [2024-11-27 21:46:11.919073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:48.828 [2024-11-27 21:46:11.920908] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:48.828 [2024-11-27 21:46:11.920942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:48.828 [2024-11-27 21:46:11.920951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:48.828 [2024-11-27 21:46:11.920959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:48.828 [2024-11-27 21:46:11.920966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:48.828 [2024-11-27 21:46:11.920973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:48.828 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.829 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.089 21:46:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.089 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.089 "name": "Existed_Raid", 00:13:49.089 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:49.089 "strip_size_kb": 64, 00:13:49.089 "state": "configuring", 00:13:49.089 "raid_level": "raid5f", 00:13:49.089 "superblock": true, 00:13:49.089 "num_base_bdevs": 4, 00:13:49.089 "num_base_bdevs_discovered": 1, 00:13:49.089 "num_base_bdevs_operational": 4, 00:13:49.089 "base_bdevs_list": [ 00:13:49.089 { 00:13:49.089 "name": "BaseBdev1", 00:13:49.089 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:49.089 "is_configured": true, 00:13:49.089 "data_offset": 2048, 00:13:49.089 "data_size": 63488 00:13:49.089 }, 00:13:49.089 { 00:13:49.089 "name": "BaseBdev2", 00:13:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.089 "is_configured": false, 00:13:49.089 "data_offset": 0, 00:13:49.089 "data_size": 0 00:13:49.089 }, 00:13:49.089 { 00:13:49.089 "name": "BaseBdev3", 00:13:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.089 "is_configured": false, 00:13:49.089 "data_offset": 0, 00:13:49.089 "data_size": 0 00:13:49.089 }, 00:13:49.089 { 00:13:49.089 "name": "BaseBdev4", 00:13:49.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.089 "is_configured": false, 00:13:49.089 "data_offset": 0, 00:13:49.089 "data_size": 0 00:13:49.089 } 00:13:49.089 ] 00:13:49.089 }' 00:13:49.089 21:46:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.089 21:46:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.349 [2024-11-27 21:46:12.393004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:49.349 BaseBdev2 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.349 [ 00:13:49.349 { 00:13:49.349 "name": "BaseBdev2", 00:13:49.349 "aliases": [ 00:13:49.349 
"489c2686-adf4-4bbd-b50a-a016a36628e2" 00:13:49.349 ], 00:13:49.349 "product_name": "Malloc disk", 00:13:49.349 "block_size": 512, 00:13:49.349 "num_blocks": 65536, 00:13:49.349 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:49.349 "assigned_rate_limits": { 00:13:49.349 "rw_ios_per_sec": 0, 00:13:49.349 "rw_mbytes_per_sec": 0, 00:13:49.349 "r_mbytes_per_sec": 0, 00:13:49.349 "w_mbytes_per_sec": 0 00:13:49.349 }, 00:13:49.349 "claimed": true, 00:13:49.349 "claim_type": "exclusive_write", 00:13:49.349 "zoned": false, 00:13:49.349 "supported_io_types": { 00:13:49.349 "read": true, 00:13:49.349 "write": true, 00:13:49.349 "unmap": true, 00:13:49.349 "flush": true, 00:13:49.349 "reset": true, 00:13:49.349 "nvme_admin": false, 00:13:49.349 "nvme_io": false, 00:13:49.349 "nvme_io_md": false, 00:13:49.349 "write_zeroes": true, 00:13:49.349 "zcopy": true, 00:13:49.349 "get_zone_info": false, 00:13:49.349 "zone_management": false, 00:13:49.349 "zone_append": false, 00:13:49.349 "compare": false, 00:13:49.349 "compare_and_write": false, 00:13:49.349 "abort": true, 00:13:49.349 "seek_hole": false, 00:13:49.349 "seek_data": false, 00:13:49.349 "copy": true, 00:13:49.349 "nvme_iov_md": false 00:13:49.349 }, 00:13:49.349 "memory_domains": [ 00:13:49.349 { 00:13:49.349 "dma_device_id": "system", 00:13:49.349 "dma_device_type": 1 00:13:49.349 }, 00:13:49.349 { 00:13:49.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.349 "dma_device_type": 2 00:13:49.349 } 00:13:49.349 ], 00:13:49.349 "driver_specific": {} 00:13:49.349 } 00:13:49.349 ] 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.349 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.350 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.610 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.610 "name": "Existed_Raid", 00:13:49.610 "uuid": 
"05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:49.610 "strip_size_kb": 64, 00:13:49.610 "state": "configuring", 00:13:49.610 "raid_level": "raid5f", 00:13:49.610 "superblock": true, 00:13:49.610 "num_base_bdevs": 4, 00:13:49.610 "num_base_bdevs_discovered": 2, 00:13:49.610 "num_base_bdevs_operational": 4, 00:13:49.610 "base_bdevs_list": [ 00:13:49.610 { 00:13:49.610 "name": "BaseBdev1", 00:13:49.610 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:49.610 "is_configured": true, 00:13:49.610 "data_offset": 2048, 00:13:49.610 "data_size": 63488 00:13:49.610 }, 00:13:49.610 { 00:13:49.610 "name": "BaseBdev2", 00:13:49.610 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:49.610 "is_configured": true, 00:13:49.610 "data_offset": 2048, 00:13:49.610 "data_size": 63488 00:13:49.610 }, 00:13:49.610 { 00:13:49.610 "name": "BaseBdev3", 00:13:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.610 "is_configured": false, 00:13:49.610 "data_offset": 0, 00:13:49.610 "data_size": 0 00:13:49.610 }, 00:13:49.610 { 00:13:49.610 "name": "BaseBdev4", 00:13:49.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.610 "is_configured": false, 00:13:49.610 "data_offset": 0, 00:13:49.610 "data_size": 0 00:13:49.610 } 00:13:49.610 ] 00:13:49.610 }' 00:13:49.610 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.610 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 [2024-11-27 21:46:12.863314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:49.870 BaseBdev3 
00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.870 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.870 [ 00:13:49.870 { 00:13:49.870 "name": "BaseBdev3", 00:13:49.870 "aliases": [ 00:13:49.870 "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd" 00:13:49.870 ], 00:13:49.870 "product_name": "Malloc disk", 00:13:49.870 "block_size": 512, 00:13:49.870 "num_blocks": 65536, 00:13:49.870 "uuid": "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd", 00:13:49.870 
"assigned_rate_limits": { 00:13:49.870 "rw_ios_per_sec": 0, 00:13:49.870 "rw_mbytes_per_sec": 0, 00:13:49.870 "r_mbytes_per_sec": 0, 00:13:49.870 "w_mbytes_per_sec": 0 00:13:49.870 }, 00:13:49.870 "claimed": true, 00:13:49.870 "claim_type": "exclusive_write", 00:13:49.870 "zoned": false, 00:13:49.870 "supported_io_types": { 00:13:49.870 "read": true, 00:13:49.870 "write": true, 00:13:49.870 "unmap": true, 00:13:49.870 "flush": true, 00:13:49.870 "reset": true, 00:13:49.870 "nvme_admin": false, 00:13:49.870 "nvme_io": false, 00:13:49.870 "nvme_io_md": false, 00:13:49.870 "write_zeroes": true, 00:13:49.870 "zcopy": true, 00:13:49.870 "get_zone_info": false, 00:13:49.870 "zone_management": false, 00:13:49.870 "zone_append": false, 00:13:49.870 "compare": false, 00:13:49.871 "compare_and_write": false, 00:13:49.871 "abort": true, 00:13:49.871 "seek_hole": false, 00:13:49.871 "seek_data": false, 00:13:49.871 "copy": true, 00:13:49.871 "nvme_iov_md": false 00:13:49.871 }, 00:13:49.871 "memory_domains": [ 00:13:49.871 { 00:13:49.871 "dma_device_id": "system", 00:13:49.871 "dma_device_type": 1 00:13:49.871 }, 00:13:49.871 { 00:13:49.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:49.871 "dma_device_type": 2 00:13:49.871 } 00:13:49.871 ], 00:13:49.871 "driver_specific": {} 00:13:49.871 } 00:13:49.871 ] 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.871 "name": "Existed_Raid", 00:13:49.871 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:49.871 "strip_size_kb": 64, 00:13:49.871 "state": "configuring", 00:13:49.871 "raid_level": "raid5f", 00:13:49.871 "superblock": true, 00:13:49.871 "num_base_bdevs": 4, 00:13:49.871 "num_base_bdevs_discovered": 3, 
00:13:49.871 "num_base_bdevs_operational": 4, 00:13:49.871 "base_bdevs_list": [ 00:13:49.871 { 00:13:49.871 "name": "BaseBdev1", 00:13:49.871 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:49.871 "is_configured": true, 00:13:49.871 "data_offset": 2048, 00:13:49.871 "data_size": 63488 00:13:49.871 }, 00:13:49.871 { 00:13:49.871 "name": "BaseBdev2", 00:13:49.871 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:49.871 "is_configured": true, 00:13:49.871 "data_offset": 2048, 00:13:49.871 "data_size": 63488 00:13:49.871 }, 00:13:49.871 { 00:13:49.871 "name": "BaseBdev3", 00:13:49.871 "uuid": "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd", 00:13:49.871 "is_configured": true, 00:13:49.871 "data_offset": 2048, 00:13:49.871 "data_size": 63488 00:13:49.871 }, 00:13:49.871 { 00:13:49.871 "name": "BaseBdev4", 00:13:49.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.871 "is_configured": false, 00:13:49.871 "data_offset": 0, 00:13:49.871 "data_size": 0 00:13:49.871 } 00:13:49.871 ] 00:13:49.871 }' 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.871 21:46:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 [2024-11-27 21:46:13.337215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:50.441 [2024-11-27 21:46:13.337555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:50.441 [2024-11-27 21:46:13.337612] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:50.441 BaseBdev4 
00:13:50.441 [2024-11-27 21:46:13.337931] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:50.441 [2024-11-27 21:46:13.338415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:50.441 [2024-11-27 21:46:13.338474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:13:50.441 [2024-11-27 21:46:13.338676] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:50.441 21:46:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.441 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 [ 00:13:50.441 { 00:13:50.441 "name": "BaseBdev4", 00:13:50.441 "aliases": [ 00:13:50.441 "cc526bcb-a1ce-46e0-86fe-ff9aeece4f48" 00:13:50.441 ], 00:13:50.441 "product_name": "Malloc disk", 00:13:50.441 "block_size": 512, 00:13:50.441 "num_blocks": 65536, 00:13:50.441 "uuid": "cc526bcb-a1ce-46e0-86fe-ff9aeece4f48", 00:13:50.441 "assigned_rate_limits": { 00:13:50.441 "rw_ios_per_sec": 0, 00:13:50.441 "rw_mbytes_per_sec": 0, 00:13:50.441 "r_mbytes_per_sec": 0, 00:13:50.441 "w_mbytes_per_sec": 0 00:13:50.441 }, 00:13:50.441 "claimed": true, 00:13:50.441 "claim_type": "exclusive_write", 00:13:50.441 "zoned": false, 00:13:50.441 "supported_io_types": { 00:13:50.441 "read": true, 00:13:50.441 "write": true, 00:13:50.441 "unmap": true, 00:13:50.441 "flush": true, 00:13:50.441 "reset": true, 00:13:50.441 "nvme_admin": false, 00:13:50.441 "nvme_io": false, 00:13:50.441 "nvme_io_md": false, 00:13:50.441 "write_zeroes": true, 00:13:50.441 "zcopy": true, 00:13:50.441 "get_zone_info": false, 00:13:50.441 "zone_management": false, 00:13:50.441 "zone_append": false, 00:13:50.441 "compare": false, 00:13:50.441 "compare_and_write": false, 00:13:50.441 "abort": true, 00:13:50.441 "seek_hole": false, 00:13:50.441 "seek_data": false, 00:13:50.441 "copy": true, 00:13:50.441 "nvme_iov_md": false 00:13:50.441 }, 00:13:50.441 "memory_domains": [ 00:13:50.441 { 00:13:50.441 "dma_device_id": "system", 00:13:50.442 "dma_device_type": 1 00:13:50.442 }, 00:13:50.442 { 00:13:50.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.442 "dma_device_type": 2 00:13:50.442 } 00:13:50.442 ], 00:13:50.442 "driver_specific": {} 00:13:50.442 } 00:13:50.442 ] 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.442 21:46:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.442 "name": "Existed_Raid", 00:13:50.442 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:50.442 "strip_size_kb": 64, 00:13:50.442 "state": "online", 00:13:50.442 "raid_level": "raid5f", 00:13:50.442 "superblock": true, 00:13:50.442 "num_base_bdevs": 4, 00:13:50.442 "num_base_bdevs_discovered": 4, 00:13:50.442 "num_base_bdevs_operational": 4, 00:13:50.442 "base_bdevs_list": [ 00:13:50.442 { 00:13:50.442 "name": "BaseBdev1", 00:13:50.442 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:50.442 "is_configured": true, 00:13:50.442 "data_offset": 2048, 00:13:50.442 "data_size": 63488 00:13:50.442 }, 00:13:50.442 { 00:13:50.442 "name": "BaseBdev2", 00:13:50.442 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:50.442 "is_configured": true, 00:13:50.442 "data_offset": 2048, 00:13:50.442 "data_size": 63488 00:13:50.442 }, 00:13:50.442 { 00:13:50.442 "name": "BaseBdev3", 00:13:50.442 "uuid": "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd", 00:13:50.442 "is_configured": true, 00:13:50.442 "data_offset": 2048, 00:13:50.442 "data_size": 63488 00:13:50.442 }, 00:13:50.442 { 00:13:50.442 "name": "BaseBdev4", 00:13:50.442 "uuid": "cc526bcb-a1ce-46e0-86fe-ff9aeece4f48", 00:13:50.442 "is_configured": true, 00:13:50.442 "data_offset": 2048, 00:13:50.442 "data_size": 63488 00:13:50.442 } 00:13:50.442 ] 00:13:50.442 }' 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.442 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.011 [2024-11-27 21:46:13.840605] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.011 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:51.011 "name": "Existed_Raid", 00:13:51.011 "aliases": [ 00:13:51.011 "05293fc8-e488-4aa8-bdaf-3512a1d49bf2" 00:13:51.011 ], 00:13:51.011 "product_name": "Raid Volume", 00:13:51.011 "block_size": 512, 00:13:51.011 "num_blocks": 190464, 00:13:51.011 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:51.011 "assigned_rate_limits": { 00:13:51.011 "rw_ios_per_sec": 0, 00:13:51.011 "rw_mbytes_per_sec": 0, 00:13:51.011 "r_mbytes_per_sec": 0, 00:13:51.011 "w_mbytes_per_sec": 0 00:13:51.011 }, 00:13:51.011 "claimed": false, 00:13:51.011 "zoned": false, 00:13:51.011 "supported_io_types": { 00:13:51.011 "read": true, 00:13:51.011 "write": true, 00:13:51.011 "unmap": false, 00:13:51.011 "flush": false, 
00:13:51.011 "reset": true, 00:13:51.011 "nvme_admin": false, 00:13:51.011 "nvme_io": false, 00:13:51.011 "nvme_io_md": false, 00:13:51.011 "write_zeroes": true, 00:13:51.011 "zcopy": false, 00:13:51.011 "get_zone_info": false, 00:13:51.011 "zone_management": false, 00:13:51.011 "zone_append": false, 00:13:51.011 "compare": false, 00:13:51.011 "compare_and_write": false, 00:13:51.011 "abort": false, 00:13:51.011 "seek_hole": false, 00:13:51.011 "seek_data": false, 00:13:51.011 "copy": false, 00:13:51.011 "nvme_iov_md": false 00:13:51.011 }, 00:13:51.011 "driver_specific": { 00:13:51.011 "raid": { 00:13:51.011 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:51.011 "strip_size_kb": 64, 00:13:51.011 "state": "online", 00:13:51.011 "raid_level": "raid5f", 00:13:51.011 "superblock": true, 00:13:51.011 "num_base_bdevs": 4, 00:13:51.011 "num_base_bdevs_discovered": 4, 00:13:51.011 "num_base_bdevs_operational": 4, 00:13:51.011 "base_bdevs_list": [ 00:13:51.011 { 00:13:51.011 "name": "BaseBdev1", 00:13:51.011 "uuid": "95f6f628-b8b4-40de-b396-396b7940e440", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 2048, 00:13:51.011 "data_size": 63488 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "name": "BaseBdev2", 00:13:51.011 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 2048, 00:13:51.011 "data_size": 63488 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "name": "BaseBdev3", 00:13:51.011 "uuid": "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 2048, 00:13:51.011 "data_size": 63488 00:13:51.011 }, 00:13:51.011 { 00:13:51.011 "name": "BaseBdev4", 00:13:51.011 "uuid": "cc526bcb-a1ce-46e0-86fe-ff9aeece4f48", 00:13:51.011 "is_configured": true, 00:13:51.011 "data_offset": 2048, 00:13:51.011 "data_size": 63488 00:13:51.011 } 00:13:51.011 ] 00:13:51.011 } 00:13:51.011 } 00:13:51.011 }' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:51.012 BaseBdev2 00:13:51.012 BaseBdev3 00:13:51.012 BaseBdev4' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.012 21:46:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:51.012 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:51.012 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.012 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.272 [2024-11-27 21:46:14.147937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.272 "name": "Existed_Raid", 00:13:51.272 "uuid": "05293fc8-e488-4aa8-bdaf-3512a1d49bf2", 00:13:51.272 "strip_size_kb": 64, 00:13:51.272 "state": "online", 00:13:51.272 "raid_level": "raid5f", 00:13:51.272 "superblock": true, 00:13:51.272 "num_base_bdevs": 4, 00:13:51.272 "num_base_bdevs_discovered": 3, 00:13:51.272 "num_base_bdevs_operational": 3, 00:13:51.272 "base_bdevs_list": [ 00:13:51.272 { 00:13:51.272 "name": 
null, 00:13:51.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.272 "is_configured": false, 00:13:51.272 "data_offset": 0, 00:13:51.272 "data_size": 63488 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "name": "BaseBdev2", 00:13:51.272 "uuid": "489c2686-adf4-4bbd-b50a-a016a36628e2", 00:13:51.272 "is_configured": true, 00:13:51.272 "data_offset": 2048, 00:13:51.272 "data_size": 63488 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "name": "BaseBdev3", 00:13:51.272 "uuid": "9fea2a4d-7b0d-4200-bd5d-491b6c3779dd", 00:13:51.272 "is_configured": true, 00:13:51.272 "data_offset": 2048, 00:13:51.272 "data_size": 63488 00:13:51.272 }, 00:13:51.272 { 00:13:51.272 "name": "BaseBdev4", 00:13:51.272 "uuid": "cc526bcb-a1ce-46e0-86fe-ff9aeece4f48", 00:13:51.272 "is_configured": true, 00:13:51.272 "data_offset": 2048, 00:13:51.272 "data_size": 63488 00:13:51.272 } 00:13:51.272 ] 00:13:51.272 }' 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.272 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.532 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.532 [2024-11-27 21:46:14.642225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.532 [2024-11-27 21:46:14.642364] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.792 [2024-11-27 21:46:14.653251] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.792 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 [2024-11-27 21:46:14.713179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 [2024-11-27 
21:46:14.780109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:51.793 [2024-11-27 21:46:14.780154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 BaseBdev2 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 [ 00:13:51.793 { 00:13:51.793 "name": "BaseBdev2", 00:13:51.793 "aliases": [ 00:13:51.793 "25940539-33ea-4c7a-8a7f-20313b1fb5a7" 00:13:51.793 ], 00:13:51.793 "product_name": "Malloc disk", 00:13:51.793 "block_size": 512, 00:13:51.793 
"num_blocks": 65536, 00:13:51.793 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:51.793 "assigned_rate_limits": { 00:13:51.793 "rw_ios_per_sec": 0, 00:13:51.793 "rw_mbytes_per_sec": 0, 00:13:51.793 "r_mbytes_per_sec": 0, 00:13:51.793 "w_mbytes_per_sec": 0 00:13:51.793 }, 00:13:51.793 "claimed": false, 00:13:51.793 "zoned": false, 00:13:51.793 "supported_io_types": { 00:13:51.793 "read": true, 00:13:51.793 "write": true, 00:13:51.793 "unmap": true, 00:13:51.793 "flush": true, 00:13:51.793 "reset": true, 00:13:51.793 "nvme_admin": false, 00:13:51.793 "nvme_io": false, 00:13:51.793 "nvme_io_md": false, 00:13:51.793 "write_zeroes": true, 00:13:51.793 "zcopy": true, 00:13:51.793 "get_zone_info": false, 00:13:51.793 "zone_management": false, 00:13:51.793 "zone_append": false, 00:13:51.793 "compare": false, 00:13:51.793 "compare_and_write": false, 00:13:51.793 "abort": true, 00:13:51.793 "seek_hole": false, 00:13:51.793 "seek_data": false, 00:13:51.793 "copy": true, 00:13:51.793 "nvme_iov_md": false 00:13:51.793 }, 00:13:51.793 "memory_domains": [ 00:13:51.793 { 00:13:51.793 "dma_device_id": "system", 00:13:51.793 "dma_device_type": 1 00:13:51.793 }, 00:13:51.793 { 00:13:51.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.793 "dma_device_type": 2 00:13:51.793 } 00:13:51.793 ], 00:13:51.793 "driver_specific": {} 00:13:51.793 } 00:13:51.793 ] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:51.793 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.793 BaseBdev3 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.793 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.054 [ 00:13:52.054 { 00:13:52.054 "name": "BaseBdev3", 00:13:52.054 "aliases": [ 00:13:52.054 
"83e5bb5d-f2c1-42dd-ba08-fdc3a833e909" 00:13:52.054 ], 00:13:52.054 "product_name": "Malloc disk", 00:13:52.054 "block_size": 512, 00:13:52.054 "num_blocks": 65536, 00:13:52.054 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:52.054 "assigned_rate_limits": { 00:13:52.054 "rw_ios_per_sec": 0, 00:13:52.054 "rw_mbytes_per_sec": 0, 00:13:52.054 "r_mbytes_per_sec": 0, 00:13:52.054 "w_mbytes_per_sec": 0 00:13:52.054 }, 00:13:52.054 "claimed": false, 00:13:52.054 "zoned": false, 00:13:52.054 "supported_io_types": { 00:13:52.054 "read": true, 00:13:52.054 "write": true, 00:13:52.054 "unmap": true, 00:13:52.054 "flush": true, 00:13:52.054 "reset": true, 00:13:52.054 "nvme_admin": false, 00:13:52.054 "nvme_io": false, 00:13:52.054 "nvme_io_md": false, 00:13:52.054 "write_zeroes": true, 00:13:52.054 "zcopy": true, 00:13:52.054 "get_zone_info": false, 00:13:52.054 "zone_management": false, 00:13:52.054 "zone_append": false, 00:13:52.054 "compare": false, 00:13:52.054 "compare_and_write": false, 00:13:52.054 "abort": true, 00:13:52.054 "seek_hole": false, 00:13:52.054 "seek_data": false, 00:13:52.054 "copy": true, 00:13:52.054 "nvme_iov_md": false 00:13:52.054 }, 00:13:52.054 "memory_domains": [ 00:13:52.054 { 00:13:52.054 "dma_device_id": "system", 00:13:52.054 "dma_device_type": 1 00:13:52.054 }, 00:13:52.054 { 00:13:52.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.054 "dma_device_type": 2 00:13:52.054 } 00:13:52.054 ], 00:13:52.054 "driver_specific": {} 00:13:52.054 } 00:13:52.054 ] 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.054 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.054 BaseBdev4 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.054 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:13:52.055 [ 00:13:52.055 { 00:13:52.055 "name": "BaseBdev4", 00:13:52.055 "aliases": [ 00:13:52.055 "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4" 00:13:52.055 ], 00:13:52.055 "product_name": "Malloc disk", 00:13:52.055 "block_size": 512, 00:13:52.055 "num_blocks": 65536, 00:13:52.055 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:52.055 "assigned_rate_limits": { 00:13:52.055 "rw_ios_per_sec": 0, 00:13:52.055 "rw_mbytes_per_sec": 0, 00:13:52.055 "r_mbytes_per_sec": 0, 00:13:52.055 "w_mbytes_per_sec": 0 00:13:52.055 }, 00:13:52.055 "claimed": false, 00:13:52.055 "zoned": false, 00:13:52.055 "supported_io_types": { 00:13:52.055 "read": true, 00:13:52.055 "write": true, 00:13:52.055 "unmap": true, 00:13:52.055 "flush": true, 00:13:52.055 "reset": true, 00:13:52.055 "nvme_admin": false, 00:13:52.055 "nvme_io": false, 00:13:52.055 "nvme_io_md": false, 00:13:52.055 "write_zeroes": true, 00:13:52.055 "zcopy": true, 00:13:52.055 "get_zone_info": false, 00:13:52.055 "zone_management": false, 00:13:52.055 "zone_append": false, 00:13:52.055 "compare": false, 00:13:52.055 "compare_and_write": false, 00:13:52.055 "abort": true, 00:13:52.055 "seek_hole": false, 00:13:52.055 "seek_data": false, 00:13:52.055 "copy": true, 00:13:52.055 "nvme_iov_md": false 00:13:52.055 }, 00:13:52.055 "memory_domains": [ 00:13:52.055 { 00:13:52.055 "dma_device_id": "system", 00:13:52.055 "dma_device_type": 1 00:13:52.055 }, 00:13:52.055 { 00:13:52.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.055 "dma_device_type": 2 00:13:52.055 } 00:13:52.055 ], 00:13:52.055 "driver_specific": {} 00:13:52.055 } 00:13:52.055 ] 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:52.055 21:46:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.055 21:46:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.055 [2024-11-27 21:46:14.999515] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:52.055 [2024-11-27 21:46:14.999555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:52.055 [2024-11-27 21:46:14.999597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.055 [2024-11-27 21:46:15.001400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.055 [2024-11-27 21:46:15.001445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.055 "name": "Existed_Raid", 00:13:52.055 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:52.055 "strip_size_kb": 64, 00:13:52.055 "state": "configuring", 00:13:52.055 "raid_level": "raid5f", 00:13:52.055 "superblock": true, 00:13:52.055 "num_base_bdevs": 4, 00:13:52.055 "num_base_bdevs_discovered": 3, 00:13:52.055 "num_base_bdevs_operational": 4, 00:13:52.055 "base_bdevs_list": [ 00:13:52.055 { 00:13:52.055 "name": "BaseBdev1", 00:13:52.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.055 "is_configured": false, 00:13:52.055 "data_offset": 0, 00:13:52.055 "data_size": 0 00:13:52.055 }, 00:13:52.055 { 00:13:52.055 "name": "BaseBdev2", 00:13:52.055 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:52.055 "is_configured": true, 00:13:52.055 "data_offset": 2048, 00:13:52.055 
"data_size": 63488 00:13:52.055 }, 00:13:52.055 { 00:13:52.055 "name": "BaseBdev3", 00:13:52.055 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:52.055 "is_configured": true, 00:13:52.055 "data_offset": 2048, 00:13:52.055 "data_size": 63488 00:13:52.055 }, 00:13:52.055 { 00:13:52.055 "name": "BaseBdev4", 00:13:52.055 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:52.055 "is_configured": true, 00:13:52.055 "data_offset": 2048, 00:13:52.055 "data_size": 63488 00:13:52.055 } 00:13:52.055 ] 00:13:52.055 }' 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.055 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.623 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.624 [2024-11-27 21:46:15.470712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.624 21:46:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.624 "name": "Existed_Raid", 00:13:52.624 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:52.624 "strip_size_kb": 64, 00:13:52.624 "state": "configuring", 00:13:52.624 "raid_level": "raid5f", 00:13:52.624 "superblock": true, 00:13:52.624 "num_base_bdevs": 4, 00:13:52.624 "num_base_bdevs_discovered": 2, 00:13:52.624 "num_base_bdevs_operational": 4, 00:13:52.624 "base_bdevs_list": [ 00:13:52.624 { 00:13:52.624 "name": "BaseBdev1", 00:13:52.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.624 "is_configured": false, 00:13:52.624 "data_offset": 0, 00:13:52.624 "data_size": 0 00:13:52.624 }, 00:13:52.624 { 00:13:52.624 "name": null, 00:13:52.624 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:52.624 
"is_configured": false, 00:13:52.624 "data_offset": 0, 00:13:52.624 "data_size": 63488 00:13:52.624 }, 00:13:52.624 { 00:13:52.624 "name": "BaseBdev3", 00:13:52.624 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:52.624 "is_configured": true, 00:13:52.624 "data_offset": 2048, 00:13:52.624 "data_size": 63488 00:13:52.624 }, 00:13:52.624 { 00:13:52.624 "name": "BaseBdev4", 00:13:52.624 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:52.624 "is_configured": true, 00:13:52.624 "data_offset": 2048, 00:13:52.624 "data_size": 63488 00:13:52.624 } 00:13:52.624 ] 00:13:52.624 }' 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.624 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 [2024-11-27 21:46:15.904785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:13:52.885 BaseBdev1 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 [ 00:13:52.885 { 00:13:52.885 "name": "BaseBdev1", 00:13:52.885 "aliases": [ 00:13:52.885 "41db51fe-9e9f-4320-8fcc-6262d63545c6" 00:13:52.885 ], 00:13:52.885 "product_name": "Malloc disk", 00:13:52.885 "block_size": 512, 00:13:52.885 "num_blocks": 65536, 00:13:52.885 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 
00:13:52.885 "assigned_rate_limits": { 00:13:52.885 "rw_ios_per_sec": 0, 00:13:52.885 "rw_mbytes_per_sec": 0, 00:13:52.885 "r_mbytes_per_sec": 0, 00:13:52.885 "w_mbytes_per_sec": 0 00:13:52.885 }, 00:13:52.885 "claimed": true, 00:13:52.885 "claim_type": "exclusive_write", 00:13:52.885 "zoned": false, 00:13:52.885 "supported_io_types": { 00:13:52.885 "read": true, 00:13:52.885 "write": true, 00:13:52.885 "unmap": true, 00:13:52.885 "flush": true, 00:13:52.885 "reset": true, 00:13:52.885 "nvme_admin": false, 00:13:52.885 "nvme_io": false, 00:13:52.885 "nvme_io_md": false, 00:13:52.885 "write_zeroes": true, 00:13:52.885 "zcopy": true, 00:13:52.885 "get_zone_info": false, 00:13:52.885 "zone_management": false, 00:13:52.885 "zone_append": false, 00:13:52.885 "compare": false, 00:13:52.885 "compare_and_write": false, 00:13:52.885 "abort": true, 00:13:52.885 "seek_hole": false, 00:13:52.885 "seek_data": false, 00:13:52.885 "copy": true, 00:13:52.885 "nvme_iov_md": false 00:13:52.885 }, 00:13:52.885 "memory_domains": [ 00:13:52.885 { 00:13:52.885 "dma_device_id": "system", 00:13:52.885 "dma_device_type": 1 00:13:52.885 }, 00:13:52.885 { 00:13:52.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.885 "dma_device_type": 2 00:13:52.885 } 00:13:52.885 ], 00:13:52.885 "driver_specific": {} 00:13:52.885 } 00:13:52.885 ] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:52.885 21:46:15 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.885 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.885 "name": "Existed_Raid", 00:13:52.885 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:52.885 "strip_size_kb": 64, 00:13:52.885 "state": "configuring", 00:13:52.885 "raid_level": "raid5f", 00:13:52.885 "superblock": true, 00:13:52.885 "num_base_bdevs": 4, 00:13:52.885 "num_base_bdevs_discovered": 3, 00:13:52.885 "num_base_bdevs_operational": 4, 00:13:52.885 "base_bdevs_list": [ 00:13:52.885 { 00:13:52.885 "name": "BaseBdev1", 00:13:52.885 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 
00:13:52.885 "is_configured": true, 00:13:52.885 "data_offset": 2048, 00:13:52.885 "data_size": 63488 00:13:52.885 }, 00:13:52.885 { 00:13:52.885 "name": null, 00:13:52.885 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:52.885 "is_configured": false, 00:13:52.885 "data_offset": 0, 00:13:52.885 "data_size": 63488 00:13:52.885 }, 00:13:52.885 { 00:13:52.885 "name": "BaseBdev3", 00:13:52.885 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:52.885 "is_configured": true, 00:13:52.885 "data_offset": 2048, 00:13:52.885 "data_size": 63488 00:13:52.885 }, 00:13:52.885 { 00:13:52.885 "name": "BaseBdev4", 00:13:52.885 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:52.885 "is_configured": true, 00:13:52.885 "data_offset": 2048, 00:13:52.885 "data_size": 63488 00:13:52.885 } 00:13:52.886 ] 00:13:52.886 }' 00:13:52.886 21:46:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.886 21:46:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.455 [2024-11-27 21:46:16.408004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.455 "name": "Existed_Raid", 00:13:53.455 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:53.455 "strip_size_kb": 64, 00:13:53.455 "state": "configuring", 00:13:53.455 "raid_level": "raid5f", 00:13:53.455 "superblock": true, 00:13:53.455 "num_base_bdevs": 4, 00:13:53.455 "num_base_bdevs_discovered": 2, 00:13:53.455 "num_base_bdevs_operational": 4, 00:13:53.455 "base_bdevs_list": [ 00:13:53.455 { 00:13:53.455 "name": "BaseBdev1", 00:13:53.455 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:53.455 "is_configured": true, 00:13:53.455 "data_offset": 2048, 00:13:53.455 "data_size": 63488 00:13:53.455 }, 00:13:53.455 { 00:13:53.455 "name": null, 00:13:53.455 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:53.455 "is_configured": false, 00:13:53.455 "data_offset": 0, 00:13:53.455 "data_size": 63488 00:13:53.455 }, 00:13:53.455 { 00:13:53.455 "name": null, 00:13:53.455 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:53.455 "is_configured": false, 00:13:53.455 "data_offset": 0, 00:13:53.455 "data_size": 63488 00:13:53.455 }, 00:13:53.455 { 00:13:53.455 "name": "BaseBdev4", 00:13:53.455 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:53.455 "is_configured": true, 00:13:53.455 "data_offset": 2048, 00:13:53.455 "data_size": 63488 00:13:53.455 } 00:13:53.455 ] 00:13:53.455 }' 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.455 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.024 [2024-11-27 21:46:16.895208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.024 21:46:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.024 "name": "Existed_Raid", 00:13:54.024 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:54.024 "strip_size_kb": 64, 00:13:54.024 "state": "configuring", 00:13:54.024 "raid_level": "raid5f", 00:13:54.024 "superblock": true, 00:13:54.024 "num_base_bdevs": 4, 00:13:54.024 "num_base_bdevs_discovered": 3, 00:13:54.024 "num_base_bdevs_operational": 4, 00:13:54.024 "base_bdevs_list": [ 00:13:54.024 { 00:13:54.024 "name": "BaseBdev1", 00:13:54.024 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:54.024 "is_configured": true, 00:13:54.024 "data_offset": 2048, 00:13:54.024 "data_size": 63488 00:13:54.024 }, 00:13:54.024 { 00:13:54.024 "name": null, 00:13:54.024 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:54.024 "is_configured": false, 00:13:54.024 "data_offset": 0, 00:13:54.024 "data_size": 63488 00:13:54.024 }, 00:13:54.024 { 00:13:54.024 "name": "BaseBdev3", 00:13:54.024 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:54.024 
"is_configured": true, 00:13:54.024 "data_offset": 2048, 00:13:54.024 "data_size": 63488 00:13:54.024 }, 00:13:54.024 { 00:13:54.024 "name": "BaseBdev4", 00:13:54.024 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:54.024 "is_configured": true, 00:13:54.024 "data_offset": 2048, 00:13:54.024 "data_size": 63488 00:13:54.024 } 00:13:54.024 ] 00:13:54.024 }' 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.024 21:46:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.283 [2024-11-27 21:46:17.346473] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring 
raid5f 64 4 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.283 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.542 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.542 "name": "Existed_Raid", 00:13:54.542 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:54.542 "strip_size_kb": 64, 00:13:54.542 "state": "configuring", 00:13:54.542 "raid_level": "raid5f", 00:13:54.542 
"superblock": true, 00:13:54.542 "num_base_bdevs": 4, 00:13:54.542 "num_base_bdevs_discovered": 2, 00:13:54.542 "num_base_bdevs_operational": 4, 00:13:54.542 "base_bdevs_list": [ 00:13:54.542 { 00:13:54.542 "name": null, 00:13:54.542 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:54.542 "is_configured": false, 00:13:54.542 "data_offset": 0, 00:13:54.542 "data_size": 63488 00:13:54.542 }, 00:13:54.542 { 00:13:54.542 "name": null, 00:13:54.542 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:54.542 "is_configured": false, 00:13:54.542 "data_offset": 0, 00:13:54.542 "data_size": 63488 00:13:54.542 }, 00:13:54.542 { 00:13:54.542 "name": "BaseBdev3", 00:13:54.542 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:54.542 "is_configured": true, 00:13:54.542 "data_offset": 2048, 00:13:54.542 "data_size": 63488 00:13:54.542 }, 00:13:54.542 { 00:13:54.542 "name": "BaseBdev4", 00:13:54.542 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:54.542 "is_configured": true, 00:13:54.542 "data_offset": 2048, 00:13:54.542 "data_size": 63488 00:13:54.542 } 00:13:54.542 ] 00:13:54.542 }' 00:13:54.542 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.542 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- 
# [[ false == \f\a\l\s\e ]] 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.802 [2024-11-27 21:46:17.815976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:54.802 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.803 21:46:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.803 "name": "Existed_Raid", 00:13:54.803 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:54.803 "strip_size_kb": 64, 00:13:54.803 "state": "configuring", 00:13:54.803 "raid_level": "raid5f", 00:13:54.803 "superblock": true, 00:13:54.803 "num_base_bdevs": 4, 00:13:54.803 "num_base_bdevs_discovered": 3, 00:13:54.803 "num_base_bdevs_operational": 4, 00:13:54.803 "base_bdevs_list": [ 00:13:54.803 { 00:13:54.803 "name": null, 00:13:54.803 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:54.803 "is_configured": false, 00:13:54.803 "data_offset": 0, 00:13:54.803 "data_size": 63488 00:13:54.803 }, 00:13:54.803 { 00:13:54.803 "name": "BaseBdev2", 00:13:54.803 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:54.803 "is_configured": true, 00:13:54.803 "data_offset": 2048, 00:13:54.803 "data_size": 63488 00:13:54.803 }, 00:13:54.803 { 00:13:54.803 "name": "BaseBdev3", 00:13:54.803 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:54.803 "is_configured": true, 00:13:54.803 "data_offset": 2048, 00:13:54.803 "data_size": 63488 00:13:54.803 }, 00:13:54.803 { 00:13:54.803 "name": "BaseBdev4", 00:13:54.803 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:54.803 "is_configured": true, 00:13:54.803 "data_offset": 2048, 00:13:54.803 "data_size": 63488 00:13:54.803 } 00:13:54.803 ] 00:13:54.803 }' 00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:13:54.803 21:46:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 41db51fe-9e9f-4320-8fcc-6262d63545c6 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 [2024-11-27 21:46:18.397710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:55.373 [2024-11-27 21:46:18.397970] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:13:55.373 [2024-11-27 21:46:18.398018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:55.373 [2024-11-27 21:46:18.398320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002a10 00:13:55.373 NewBaseBdev 00:13:55.373 [2024-11-27 21:46:18.398828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:13:55.373 [2024-11-27 21:46:18.398882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001c80 00:13:55.373 [2024-11-27 21:46:18.399030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 [ 00:13:55.373 { 00:13:55.373 "name": "NewBaseBdev", 00:13:55.373 "aliases": [ 00:13:55.373 "41db51fe-9e9f-4320-8fcc-6262d63545c6" 00:13:55.373 ], 00:13:55.373 "product_name": "Malloc disk", 00:13:55.373 "block_size": 512, 00:13:55.373 "num_blocks": 65536, 00:13:55.373 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:55.373 "assigned_rate_limits": { 00:13:55.373 "rw_ios_per_sec": 0, 00:13:55.373 "rw_mbytes_per_sec": 0, 00:13:55.373 "r_mbytes_per_sec": 0, 00:13:55.373 "w_mbytes_per_sec": 0 00:13:55.373 }, 00:13:55.373 "claimed": true, 00:13:55.373 "claim_type": "exclusive_write", 00:13:55.373 "zoned": false, 00:13:55.373 "supported_io_types": { 00:13:55.373 "read": true, 00:13:55.373 "write": true, 00:13:55.373 "unmap": true, 00:13:55.373 "flush": true, 00:13:55.373 "reset": true, 00:13:55.373 "nvme_admin": false, 00:13:55.373 "nvme_io": false, 00:13:55.373 "nvme_io_md": false, 00:13:55.373 "write_zeroes": true, 00:13:55.373 "zcopy": true, 00:13:55.373 "get_zone_info": false, 00:13:55.373 "zone_management": false, 00:13:55.373 "zone_append": false, 00:13:55.373 "compare": false, 00:13:55.373 "compare_and_write": false, 00:13:55.373 "abort": true, 00:13:55.373 "seek_hole": false, 00:13:55.373 "seek_data": false, 00:13:55.373 "copy": true, 00:13:55.373 "nvme_iov_md": false 00:13:55.373 }, 00:13:55.373 "memory_domains": [ 00:13:55.373 { 00:13:55.373 "dma_device_id": "system", 00:13:55.373 "dma_device_type": 1 00:13:55.373 }, 00:13:55.373 { 00:13:55.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:55.373 "dma_device_type": 2 00:13:55.373 } 
00:13:55.373 ], 00:13:55.373 "driver_specific": {} 00:13:55.373 } 00:13:55.373 ] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.373 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:55.373 
21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.633 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.633 "name": "Existed_Raid", 00:13:55.633 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:55.633 "strip_size_kb": 64, 00:13:55.633 "state": "online", 00:13:55.633 "raid_level": "raid5f", 00:13:55.633 "superblock": true, 00:13:55.633 "num_base_bdevs": 4, 00:13:55.633 "num_base_bdevs_discovered": 4, 00:13:55.633 "num_base_bdevs_operational": 4, 00:13:55.633 "base_bdevs_list": [ 00:13:55.633 { 00:13:55.633 "name": "NewBaseBdev", 00:13:55.633 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:55.633 "is_configured": true, 00:13:55.633 "data_offset": 2048, 00:13:55.633 "data_size": 63488 00:13:55.633 }, 00:13:55.633 { 00:13:55.633 "name": "BaseBdev2", 00:13:55.633 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:55.633 "is_configured": true, 00:13:55.633 "data_offset": 2048, 00:13:55.633 "data_size": 63488 00:13:55.633 }, 00:13:55.633 { 00:13:55.633 "name": "BaseBdev3", 00:13:55.633 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:55.633 "is_configured": true, 00:13:55.633 "data_offset": 2048, 00:13:55.633 "data_size": 63488 00:13:55.633 }, 00:13:55.633 { 00:13:55.633 "name": "BaseBdev4", 00:13:55.633 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:55.633 "is_configured": true, 00:13:55.633 "data_offset": 2048, 00:13:55.633 "data_size": 63488 00:13:55.633 } 00:13:55.633 ] 00:13:55.633 }' 00:13:55.633 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.633 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.892 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:55.892 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:13:55.892 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:55.892 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:55.892 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:55.893 [2024-11-27 21:46:18.893081] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:55.893 "name": "Existed_Raid", 00:13:55.893 "aliases": [ 00:13:55.893 "61972d7f-5117-42c4-b04b-a71e5267fa49" 00:13:55.893 ], 00:13:55.893 "product_name": "Raid Volume", 00:13:55.893 "block_size": 512, 00:13:55.893 "num_blocks": 190464, 00:13:55.893 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:55.893 "assigned_rate_limits": { 00:13:55.893 "rw_ios_per_sec": 0, 00:13:55.893 "rw_mbytes_per_sec": 0, 00:13:55.893 "r_mbytes_per_sec": 0, 00:13:55.893 "w_mbytes_per_sec": 0 00:13:55.893 }, 00:13:55.893 "claimed": false, 00:13:55.893 "zoned": false, 00:13:55.893 "supported_io_types": { 00:13:55.893 "read": true, 00:13:55.893 "write": true, 00:13:55.893 "unmap": false, 00:13:55.893 "flush": false, 
00:13:55.893 "reset": true, 00:13:55.893 "nvme_admin": false, 00:13:55.893 "nvme_io": false, 00:13:55.893 "nvme_io_md": false, 00:13:55.893 "write_zeroes": true, 00:13:55.893 "zcopy": false, 00:13:55.893 "get_zone_info": false, 00:13:55.893 "zone_management": false, 00:13:55.893 "zone_append": false, 00:13:55.893 "compare": false, 00:13:55.893 "compare_and_write": false, 00:13:55.893 "abort": false, 00:13:55.893 "seek_hole": false, 00:13:55.893 "seek_data": false, 00:13:55.893 "copy": false, 00:13:55.893 "nvme_iov_md": false 00:13:55.893 }, 00:13:55.893 "driver_specific": { 00:13:55.893 "raid": { 00:13:55.893 "uuid": "61972d7f-5117-42c4-b04b-a71e5267fa49", 00:13:55.893 "strip_size_kb": 64, 00:13:55.893 "state": "online", 00:13:55.893 "raid_level": "raid5f", 00:13:55.893 "superblock": true, 00:13:55.893 "num_base_bdevs": 4, 00:13:55.893 "num_base_bdevs_discovered": 4, 00:13:55.893 "num_base_bdevs_operational": 4, 00:13:55.893 "base_bdevs_list": [ 00:13:55.893 { 00:13:55.893 "name": "NewBaseBdev", 00:13:55.893 "uuid": "41db51fe-9e9f-4320-8fcc-6262d63545c6", 00:13:55.893 "is_configured": true, 00:13:55.893 "data_offset": 2048, 00:13:55.893 "data_size": 63488 00:13:55.893 }, 00:13:55.893 { 00:13:55.893 "name": "BaseBdev2", 00:13:55.893 "uuid": "25940539-33ea-4c7a-8a7f-20313b1fb5a7", 00:13:55.893 "is_configured": true, 00:13:55.893 "data_offset": 2048, 00:13:55.893 "data_size": 63488 00:13:55.893 }, 00:13:55.893 { 00:13:55.893 "name": "BaseBdev3", 00:13:55.893 "uuid": "83e5bb5d-f2c1-42dd-ba08-fdc3a833e909", 00:13:55.893 "is_configured": true, 00:13:55.893 "data_offset": 2048, 00:13:55.893 "data_size": 63488 00:13:55.893 }, 00:13:55.893 { 00:13:55.893 "name": "BaseBdev4", 00:13:55.893 "uuid": "cf5dbd4d-4e63-4b78-a35e-979a06ea5ad4", 00:13:55.893 "is_configured": true, 00:13:55.893 "data_offset": 2048, 00:13:55.893 "data_size": 63488 00:13:55.893 } 00:13:55.893 ] 00:13:55.893 } 00:13:55.893 } 00:13:55.893 }' 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:55.893 BaseBdev2 00:13:55.893 BaseBdev3 00:13:55.893 BaseBdev4' 00:13:55.893 21:46:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.168 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:56.169 21:46:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.169 [2024-11-27 21:46:19.232318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:56.169 [2024-11-27 21:46:19.232345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.169 [2024-11-27 21:46:19.232413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.169 [2024-11-27 21:46:19.232669] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.169 [2024-11-27 21:46:19.232680] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name Existed_Raid, state offline 00:13:56.169 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 93562 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 93562 ']' 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 93562 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93562 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.170 killing process with pid 93562 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93562' 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 93562 00:13:56.170 [2024-11-27 21:46:19.271639] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.170 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 93562 00:13:56.438 [2024-11-27 21:46:19.310957] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.438 21:46:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:56.438 00:13:56.438 real 0m9.460s 00:13:56.438 user 0m16.164s 00:13:56.438 sys 0m2.066s 00:13:56.438 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.438 21:46:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:56.438 ************************************ 00:13:56.438 END TEST raid5f_state_function_test_sb 00:13:56.438 ************************************ 00:13:56.697 21:46:19 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:13:56.697 21:46:19 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:56.697 21:46:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.697 21:46:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.697 ************************************ 00:13:56.697 START TEST raid5f_superblock_test 00:13:56.697 ************************************ 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=94210 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 94210 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 94210 ']' 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.697 21:46:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.697 [2024-11-27 21:46:19.680612] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:13:56.697 [2024-11-27 21:46:19.680826] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94210 ] 00:13:56.697 [2024-11-27 21:46:19.811905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.957 [2024-11-27 21:46:19.836168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.957 [2024-11-27 21:46:19.877124] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:56.957 [2024-11-27 21:46:19.877241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 malloc1 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 [2024-11-27 21:46:20.531590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:57.526 [2024-11-27 21:46:20.531703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.526 [2024-11-27 21:46:20.531742] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:13:57.526 [2024-11-27 21:46:20.531809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.526 [2024-11-27 21:46:20.533961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.526 [2024-11-27 21:46:20.534028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:57.526 pt1 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 malloc2 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 [2024-11-27 21:46:20.563746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:57.526 [2024-11-27 21:46:20.563825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.526 [2024-11-27 21:46:20.563843] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:57.526 [2024-11-27 21:46:20.563853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.526 [2024-11-27 21:46:20.565868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.526 [2024-11-27 21:46:20.565899] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:57.526 pt2 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 malloc3 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.526 [2024-11-27 21:46:20.591901] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:57.526 [2024-11-27 21:46:20.591998] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.526 [2024-11-27 21:46:20.592033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:57.526 [2024-11-27 21:46:20.592069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.526 [2024-11-27 21:46:20.594146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.526 [2024-11-27 21:46:20.594227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:57.526 pt3 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:57.526 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.527 21:46:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.527 malloc4 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.527 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.527 [2024-11-27 21:46:20.641227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:57.527 [2024-11-27 21:46:20.641396] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.527 [2024-11-27 21:46:20.641466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:57.527 [2024-11-27 21:46:20.641542] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.527 [2024-11-27 21:46:20.645861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.527 [2024-11-27 21:46:20.645992] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:57.786 pt4 00:13:57.786 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.787 [2024-11-27 21:46:20.654270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:57.787 [2024-11-27 21:46:20.656921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:57.787 [2024-11-27 21:46:20.657059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:57.787 [2024-11-27 21:46:20.657188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:57.787 [2024-11-27 21:46:20.657489] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:13:57.787 [2024-11-27 21:46:20.657538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:57.787 [2024-11-27 21:46:20.657835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:13:57.787 [2024-11-27 21:46:20.658351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:13:57.787 [2024-11-27 21:46:20.658399] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:13:57.787 [2024-11-27 21:46:20.658637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:57.787 
21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.787 "name": "raid_bdev1", 00:13:57.787 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:57.787 "strip_size_kb": 64, 00:13:57.787 "state": "online", 00:13:57.787 "raid_level": "raid5f", 00:13:57.787 "superblock": true, 00:13:57.787 "num_base_bdevs": 4, 00:13:57.787 "num_base_bdevs_discovered": 4, 00:13:57.787 "num_base_bdevs_operational": 4, 00:13:57.787 "base_bdevs_list": [ 00:13:57.787 { 00:13:57.787 "name": "pt1", 00:13:57.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:57.787 "is_configured": true, 00:13:57.787 "data_offset": 2048, 00:13:57.787 "data_size": 63488 00:13:57.787 }, 00:13:57.787 { 00:13:57.787 "name": "pt2", 00:13:57.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:57.787 "is_configured": true, 00:13:57.787 "data_offset": 2048, 00:13:57.787 
"data_size": 63488 00:13:57.787 }, 00:13:57.787 { 00:13:57.787 "name": "pt3", 00:13:57.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:57.787 "is_configured": true, 00:13:57.787 "data_offset": 2048, 00:13:57.787 "data_size": 63488 00:13:57.787 }, 00:13:57.787 { 00:13:57.787 "name": "pt4", 00:13:57.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:57.787 "is_configured": true, 00:13:57.787 "data_offset": 2048, 00:13:57.787 "data_size": 63488 00:13:57.787 } 00:13:57.787 ] 00:13:57.787 }' 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.787 21:46:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:58.047 [2024-11-27 21:46:21.110033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.047 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:58.047 "name": "raid_bdev1", 00:13:58.047 "aliases": [ 00:13:58.047 "aeb1910f-1431-4b09-bcad-0bb0664e8d9e" 00:13:58.047 ], 00:13:58.047 "product_name": "Raid Volume", 00:13:58.047 "block_size": 512, 00:13:58.047 "num_blocks": 190464, 00:13:58.047 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:58.047 "assigned_rate_limits": { 00:13:58.047 "rw_ios_per_sec": 0, 00:13:58.047 "rw_mbytes_per_sec": 0, 00:13:58.047 "r_mbytes_per_sec": 0, 00:13:58.047 "w_mbytes_per_sec": 0 00:13:58.047 }, 00:13:58.047 "claimed": false, 00:13:58.047 "zoned": false, 00:13:58.047 "supported_io_types": { 00:13:58.047 "read": true, 00:13:58.047 "write": true, 00:13:58.047 "unmap": false, 00:13:58.047 "flush": false, 00:13:58.047 "reset": true, 00:13:58.047 "nvme_admin": false, 00:13:58.047 "nvme_io": false, 00:13:58.047 "nvme_io_md": false, 00:13:58.047 "write_zeroes": true, 00:13:58.047 "zcopy": false, 00:13:58.047 "get_zone_info": false, 00:13:58.047 "zone_management": false, 00:13:58.047 "zone_append": false, 00:13:58.047 "compare": false, 00:13:58.047 "compare_and_write": false, 00:13:58.047 "abort": false, 00:13:58.047 "seek_hole": false, 00:13:58.047 "seek_data": false, 00:13:58.047 "copy": false, 00:13:58.047 "nvme_iov_md": false 00:13:58.047 }, 00:13:58.047 "driver_specific": { 00:13:58.047 "raid": { 00:13:58.047 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:58.047 "strip_size_kb": 64, 00:13:58.047 "state": "online", 00:13:58.047 "raid_level": "raid5f", 00:13:58.047 "superblock": true, 00:13:58.047 "num_base_bdevs": 4, 00:13:58.047 "num_base_bdevs_discovered": 4, 00:13:58.047 "num_base_bdevs_operational": 4, 00:13:58.047 "base_bdevs_list": [ 00:13:58.047 { 00:13:58.047 "name": "pt1", 00:13:58.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.047 "is_configured": true, 00:13:58.047 "data_offset": 2048, 
00:13:58.047 "data_size": 63488 00:13:58.047 }, 00:13:58.047 { 00:13:58.047 "name": "pt2", 00:13:58.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.047 "is_configured": true, 00:13:58.047 "data_offset": 2048, 00:13:58.047 "data_size": 63488 00:13:58.047 }, 00:13:58.047 { 00:13:58.047 "name": "pt3", 00:13:58.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.047 "is_configured": true, 00:13:58.047 "data_offset": 2048, 00:13:58.047 "data_size": 63488 00:13:58.047 }, 00:13:58.047 { 00:13:58.047 "name": "pt4", 00:13:58.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:58.048 "is_configured": true, 00:13:58.048 "data_offset": 2048, 00:13:58.048 "data_size": 63488 00:13:58.048 } 00:13:58.048 ] 00:13:58.048 } 00:13:58.048 } 00:13:58.048 }' 00:13:58.048 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:58.308 pt2 00:13:58.308 pt3 00:13:58.308 pt4' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.308 21:46:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.308 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:58.309 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 [2024-11-27 21:46:21.457385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=aeb1910f-1431-4b09-bcad-0bb0664e8d9e 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
aeb1910f-1431-4b09-bcad-0bb0664e8d9e ']' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 [2024-11-27 21:46:21.505140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.569 [2024-11-27 21:46:21.505205] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:58.569 [2024-11-27 21:46:21.505302] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:58.569 [2024-11-27 21:46:21.505402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:58.569 [2024-11-27 21:46:21.505458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.569 
21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:58.569 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.570 [2024-11-27 21:46:21.648927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:58.570 [2024-11-27 21:46:21.650714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:58.570 [2024-11-27 21:46:21.650760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:58.570 [2024-11-27 21:46:21.650787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:58.570 [2024-11-27 21:46:21.650848] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:58.570 [2024-11-27 21:46:21.650887] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:58.570 [2024-11-27 21:46:21.650905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:58.570 [2024-11-27 21:46:21.650920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:58.570 [2024-11-27 21:46:21.650933] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:58.570 [2024-11-27 21:46:21.650943] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:13:58.570 request: 00:13:58.570 { 00:13:58.570 "name": "raid_bdev1", 00:13:58.570 "raid_level": "raid5f", 00:13:58.570 "base_bdevs": [ 00:13:58.570 "malloc1", 00:13:58.570 "malloc2", 00:13:58.570 "malloc3", 00:13:58.570 "malloc4" 00:13:58.570 ], 00:13:58.570 "strip_size_kb": 64, 00:13:58.570 "superblock": false, 00:13:58.570 "method": "bdev_raid_create", 00:13:58.570 "req_id": 1 00:13:58.570 } 00:13:58.570 Got JSON-RPC error response 
00:13:58.570 response: 00:13:58.570 { 00:13:58.570 "code": -17, 00:13:58.570 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:58.570 } 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:58.570 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.830 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:58.830 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:58.830 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:58.830 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.830 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.830 [2024-11-27 21:46:21.716771] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:58.830 [2024-11-27 21:46:21.716872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:58.830 [2024-11-27 21:46:21.716914] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:58.830 [2024-11-27 21:46:21.716942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.830 [2024-11-27 21:46:21.719063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.831 [2024-11-27 21:46:21.719142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:58.831 [2024-11-27 21:46:21.719226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:58.831 [2024-11-27 21:46:21.719287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:58.831 pt1 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.831 "name": "raid_bdev1", 00:13:58.831 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:58.831 "strip_size_kb": 64, 00:13:58.831 "state": "configuring", 00:13:58.831 "raid_level": "raid5f", 00:13:58.831 "superblock": true, 00:13:58.831 "num_base_bdevs": 4, 00:13:58.831 "num_base_bdevs_discovered": 1, 00:13:58.831 "num_base_bdevs_operational": 4, 00:13:58.831 "base_bdevs_list": [ 00:13:58.831 { 00:13:58.831 "name": "pt1", 00:13:58.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:58.831 "is_configured": true, 00:13:58.831 "data_offset": 2048, 00:13:58.831 "data_size": 63488 00:13:58.831 }, 00:13:58.831 { 00:13:58.831 "name": null, 00:13:58.831 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:58.831 "is_configured": false, 00:13:58.831 "data_offset": 2048, 00:13:58.831 "data_size": 63488 00:13:58.831 }, 00:13:58.831 { 00:13:58.831 "name": null, 00:13:58.831 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:58.831 "is_configured": false, 00:13:58.831 "data_offset": 2048, 00:13:58.831 "data_size": 63488 00:13:58.831 }, 00:13:58.831 { 00:13:58.831 "name": null, 00:13:58.831 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:58.831 "is_configured": false, 00:13:58.831 "data_offset": 2048, 00:13:58.831 "data_size": 63488 00:13:58.831 } 00:13:58.831 ] 00:13:58.831 }' 
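The trace above repeatedly builds a comparison string from `[.block_size, .md_size, .md_interleave, .dif_type]` (bdev_raid.sh@189) and pattern-matches it at @193, which the xtrace renders as `[[ 512 == \5\1\2\ \ \ ]]`. A minimal stand-alone sketch of that check, with the values hard-coded as they appear in this run (the variable names mirror the script; the literal strings are assumptions reconstructed from the trace, not output of a live SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@189/@193 property check seen in the trace.
# With only block_size set (512) and md_size/md_interleave/dif_type null,
# jq's join(" ") yields "512   " -- three trailing spaces. Bash xtrace
# prints the match pattern with escaped spaces, hence "\5\1\2\ \ \ ".
cmp_raid_bdev='512   '   # joined fields of the raid bdev (block_size=512, rest null)
cmp_base_bdev='512   '   # joined fields of a base bdev (pt1..pt4)
if [[ "$cmp_base_bdev" == "$cmp_raid_bdev" ]]; then
  echo "match"
fi
```

The trailing spaces are significant: a base bdev with a non-null `md_size` would join to a different string and fail the match.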
00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.831 21:46:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 [2024-11-27 21:46:22.128076] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:59.091 [2024-11-27 21:46:22.128163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.091 [2024-11-27 21:46:22.128200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:13:59.091 [2024-11-27 21:46:22.128226] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.091 [2024-11-27 21:46:22.128642] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.091 [2024-11-27 21:46:22.128696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:59.091 [2024-11-27 21:46:22.128804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:59.091 [2024-11-27 21:46:22.128852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:59.091 pt2 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 [2024-11-27 21:46:22.140083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.091 "name": "raid_bdev1", 00:13:59.091 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:59.091 "strip_size_kb": 64, 00:13:59.091 "state": "configuring", 00:13:59.091 "raid_level": "raid5f", 00:13:59.091 "superblock": true, 00:13:59.091 "num_base_bdevs": 4, 00:13:59.091 "num_base_bdevs_discovered": 1, 00:13:59.091 "num_base_bdevs_operational": 4, 00:13:59.091 "base_bdevs_list": [ 00:13:59.091 { 00:13:59.091 "name": "pt1", 00:13:59.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.091 "is_configured": true, 00:13:59.091 "data_offset": 2048, 00:13:59.091 "data_size": 63488 00:13:59.091 }, 00:13:59.091 { 00:13:59.091 "name": null, 00:13:59.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.091 "is_configured": false, 00:13:59.091 "data_offset": 0, 00:13:59.091 "data_size": 63488 00:13:59.091 }, 00:13:59.091 { 00:13:59.091 "name": null, 00:13:59.091 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.091 "is_configured": false, 00:13:59.091 "data_offset": 2048, 00:13:59.091 "data_size": 63488 00:13:59.091 }, 00:13:59.091 { 00:13:59.091 "name": null, 00:13:59.091 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.091 "is_configured": false, 00:13:59.091 "data_offset": 2048, 00:13:59.091 "data_size": 63488 00:13:59.091 } 00:13:59.091 ] 00:13:59.091 }' 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.091 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.661 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:59.661 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.661 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
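The `(( i = 1 ))` / `(( i < num_base_bdevs ))` loop entered at bdev_raid.sh@478 recreates only pt2..pt4 (pt1 was re-registered earlier and stays claimed). A stand-alone sketch of that loop shape, with the RPC invocation stubbed out as an `echo` since the real call needs a running SPDK target (names and UUID suffixes follow the trace; the stub itself is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of the bdev_raid.sh@478-@479 recreation loop: index 0 (pt1) is
# skipped, indices 1..3 recreate the passthru bdevs on malloc2..malloc4.
num_base_bdevs=4
base_bdevs=(malloc1 malloc2 malloc3 malloc4)
for (( i = 1; i < num_base_bdevs; i++ )); do
  b=${base_bdevs[$i]}
  n=$(( i + 1 ))
  # Stand-in for: rpc_cmd bdev_passthru_create -b $b -p pt$n -u <uuid>
  echo "bdev_passthru_create -b $b -p pt$n -u 00000000-0000-0000-0000-00000000000$n"
done
```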
00:13:59.661 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.661 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.661 [2024-11-27 21:46:22.595291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:59.661 [2024-11-27 21:46:22.595410] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.662 [2024-11-27 21:46:22.595442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:13:59.662 [2024-11-27 21:46:22.595472] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.662 [2024-11-27 21:46:22.595901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.662 [2024-11-27 21:46:22.595958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:59.662 [2024-11-27 21:46:22.596072] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:59.662 [2024-11-27 21:46:22.596132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:59.662 pt2 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.662 [2024-11-27 21:46:22.607225] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:13:59.662 [2024-11-27 21:46:22.607313] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.662 [2024-11-27 21:46:22.607342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:59.662 [2024-11-27 21:46:22.607368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.662 [2024-11-27 21:46:22.607744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.662 [2024-11-27 21:46:22.607805] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:59.662 [2024-11-27 21:46:22.607893] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:59.662 [2024-11-27 21:46:22.607941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:59.662 pt3 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.662 [2024-11-27 21:46:22.619210] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:59.662 [2024-11-27 21:46:22.619253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.662 [2024-11-27 21:46:22.619281] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:59.662 [2024-11-27 21:46:22.619290] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.662 [2024-11-27 21:46:22.619552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.662 [2024-11-27 21:46:22.619568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:59.662 [2024-11-27 21:46:22.619615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:59.662 [2024-11-27 21:46:22.619632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:59.662 [2024-11-27 21:46:22.619737] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:13:59.662 [2024-11-27 21:46:22.619750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:13:59.662 [2024-11-27 21:46:22.619977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:13:59.662 [2024-11-27 21:46:22.620457] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:13:59.662 [2024-11-27 21:46:22.620474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:13:59.662 [2024-11-27 21:46:22.620570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.662 pt4 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.662 "name": "raid_bdev1", 00:13:59.662 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:13:59.662 "strip_size_kb": 64, 00:13:59.662 "state": "online", 00:13:59.662 "raid_level": "raid5f", 00:13:59.662 "superblock": true, 00:13:59.662 "num_base_bdevs": 4, 00:13:59.662 "num_base_bdevs_discovered": 4, 00:13:59.662 "num_base_bdevs_operational": 4, 00:13:59.662 "base_bdevs_list": [ 00:13:59.662 { 00:13:59.662 "name": "pt1", 00:13:59.662 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:59.662 "is_configured": true, 00:13:59.662 
"data_offset": 2048, 00:13:59.662 "data_size": 63488 00:13:59.662 }, 00:13:59.662 { 00:13:59.662 "name": "pt2", 00:13:59.662 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:59.662 "is_configured": true, 00:13:59.662 "data_offset": 2048, 00:13:59.662 "data_size": 63488 00:13:59.662 }, 00:13:59.662 { 00:13:59.662 "name": "pt3", 00:13:59.662 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:59.662 "is_configured": true, 00:13:59.662 "data_offset": 2048, 00:13:59.662 "data_size": 63488 00:13:59.662 }, 00:13:59.662 { 00:13:59.662 "name": "pt4", 00:13:59.662 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:59.662 "is_configured": true, 00:13:59.662 "data_offset": 2048, 00:13:59.662 "data_size": 63488 00:13:59.662 } 00:13:59.662 ] 00:13:59.662 }' 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.662 21:46:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.922 21:46:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:59.922 [2024-11-27 21:46:23.026688] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.922 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.182 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:00.182 "name": "raid_bdev1", 00:14:00.182 "aliases": [ 00:14:00.182 "aeb1910f-1431-4b09-bcad-0bb0664e8d9e" 00:14:00.182 ], 00:14:00.182 "product_name": "Raid Volume", 00:14:00.182 "block_size": 512, 00:14:00.182 "num_blocks": 190464, 00:14:00.182 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:00.182 "assigned_rate_limits": { 00:14:00.182 "rw_ios_per_sec": 0, 00:14:00.182 "rw_mbytes_per_sec": 0, 00:14:00.182 "r_mbytes_per_sec": 0, 00:14:00.182 "w_mbytes_per_sec": 0 00:14:00.182 }, 00:14:00.182 "claimed": false, 00:14:00.182 "zoned": false, 00:14:00.182 "supported_io_types": { 00:14:00.182 "read": true, 00:14:00.182 "write": true, 00:14:00.182 "unmap": false, 00:14:00.182 "flush": false, 00:14:00.182 "reset": true, 00:14:00.182 "nvme_admin": false, 00:14:00.182 "nvme_io": false, 00:14:00.182 "nvme_io_md": false, 00:14:00.182 "write_zeroes": true, 00:14:00.182 "zcopy": false, 00:14:00.182 "get_zone_info": false, 00:14:00.182 "zone_management": false, 00:14:00.182 "zone_append": false, 00:14:00.182 "compare": false, 00:14:00.182 "compare_and_write": false, 00:14:00.182 "abort": false, 00:14:00.182 "seek_hole": false, 00:14:00.182 "seek_data": false, 00:14:00.182 "copy": false, 00:14:00.182 "nvme_iov_md": false 00:14:00.182 }, 00:14:00.182 "driver_specific": { 00:14:00.182 "raid": { 00:14:00.182 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:00.182 "strip_size_kb": 64, 00:14:00.182 "state": "online", 00:14:00.182 "raid_level": "raid5f", 00:14:00.182 "superblock": true, 00:14:00.182 "num_base_bdevs": 4, 00:14:00.182 "num_base_bdevs_discovered": 4, 
00:14:00.182 "num_base_bdevs_operational": 4, 00:14:00.182 "base_bdevs_list": [ 00:14:00.182 { 00:14:00.182 "name": "pt1", 00:14:00.182 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:00.182 "is_configured": true, 00:14:00.182 "data_offset": 2048, 00:14:00.182 "data_size": 63488 00:14:00.182 }, 00:14:00.182 { 00:14:00.182 "name": "pt2", 00:14:00.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.182 "is_configured": true, 00:14:00.182 "data_offset": 2048, 00:14:00.182 "data_size": 63488 00:14:00.182 }, 00:14:00.182 { 00:14:00.182 "name": "pt3", 00:14:00.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.182 "is_configured": true, 00:14:00.183 "data_offset": 2048, 00:14:00.183 "data_size": 63488 00:14:00.183 }, 00:14:00.183 { 00:14:00.183 "name": "pt4", 00:14:00.183 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.183 "is_configured": true, 00:14:00.183 "data_offset": 2048, 00:14:00.183 "data_size": 63488 00:14:00.183 } 00:14:00.183 ] 00:14:00.183 } 00:14:00.183 } 00:14:00.183 }' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:00.183 pt2 00:14:00.183 pt3 00:14:00.183 pt4' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.183 21:46:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:00.183 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.443 [2024-11-27 21:46:23.338135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.443 
21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' aeb1910f-1431-4b09-bcad-0bb0664e8d9e '!=' aeb1910f-1431-4b09-bcad-0bb0664e8d9e ']' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.443 [2024-11-27 21:46:23.377930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.443 "name": "raid_bdev1", 00:14:00.443 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:00.443 "strip_size_kb": 64, 00:14:00.443 "state": "online", 00:14:00.443 "raid_level": "raid5f", 00:14:00.443 "superblock": true, 00:14:00.443 "num_base_bdevs": 4, 00:14:00.443 "num_base_bdevs_discovered": 3, 00:14:00.443 "num_base_bdevs_operational": 3, 00:14:00.443 "base_bdevs_list": [ 00:14:00.443 { 00:14:00.443 "name": null, 00:14:00.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.443 "is_configured": false, 00:14:00.443 "data_offset": 0, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "pt2", 00:14:00.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:00.443 "is_configured": true, 00:14:00.443 "data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "pt3", 00:14:00.443 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:00.443 "is_configured": true, 00:14:00.443 "data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "pt4", 00:14:00.443 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:00.443 "is_configured": true, 00:14:00.443 
"data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 } 00:14:00.443 ] 00:14:00.443 }' 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.443 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 [2024-11-27 21:46:23.877046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:01.014 [2024-11-27 21:46:23.877072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:01.014 [2024-11-27 21:46:23.877146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:01.014 [2024-11-27 21:46:23.877216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:01.014 [2024-11-27 21:46:23.877227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.014 [2024-11-27 21:46:23.976869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:01.014 [2024-11-27 21:46:23.976916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.014 [2024-11-27 21:46:23.976931] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:01.014 [2024-11-27 21:46:23.976941] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.014 [2024-11-27 21:46:23.979022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.014 [2024-11-27 21:46:23.979059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:01.014 [2024-11-27 21:46:23.979127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:01.014 [2024-11-27 21:46:23.979163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:01.014 pt2 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.014 21:46:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.015 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.015 21:46:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.015 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.015 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.015 "name": "raid_bdev1", 00:14:01.015 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:01.015 "strip_size_kb": 64, 00:14:01.015 "state": "configuring", 00:14:01.015 "raid_level": "raid5f", 00:14:01.015 "superblock": true, 00:14:01.015 
"num_base_bdevs": 4, 00:14:01.015 "num_base_bdevs_discovered": 1, 00:14:01.015 "num_base_bdevs_operational": 3, 00:14:01.015 "base_bdevs_list": [ 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": "pt2", 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.015 "is_configured": true, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 }, 00:14:01.015 { 00:14:01.015 "name": null, 00:14:01.015 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.015 "is_configured": false, 00:14:01.015 "data_offset": 2048, 00:14:01.015 "data_size": 63488 00:14:01.015 } 00:14:01.015 ] 00:14:01.015 }' 00:14:01.015 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.015 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.592 [2024-11-27 21:46:24.424215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:01.592 [2024-11-27 
21:46:24.424314] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.592 [2024-11-27 21:46:24.424350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:01.592 [2024-11-27 21:46:24.424382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.592 [2024-11-27 21:46:24.424815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.592 [2024-11-27 21:46:24.424873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:01.592 [2024-11-27 21:46:24.424983] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:01.592 [2024-11-27 21:46:24.425038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:01.592 pt3 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.592 "name": "raid_bdev1", 00:14:01.592 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:01.592 "strip_size_kb": 64, 00:14:01.592 "state": "configuring", 00:14:01.592 "raid_level": "raid5f", 00:14:01.592 "superblock": true, 00:14:01.592 "num_base_bdevs": 4, 00:14:01.592 "num_base_bdevs_discovered": 2, 00:14:01.592 "num_base_bdevs_operational": 3, 00:14:01.592 "base_bdevs_list": [ 00:14:01.592 { 00:14:01.592 "name": null, 00:14:01.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.592 "is_configured": false, 00:14:01.592 "data_offset": 2048, 00:14:01.592 "data_size": 63488 00:14:01.592 }, 00:14:01.592 { 00:14:01.592 "name": "pt2", 00:14:01.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.592 "is_configured": true, 00:14:01.592 "data_offset": 2048, 00:14:01.592 "data_size": 63488 00:14:01.592 }, 00:14:01.592 { 00:14:01.592 "name": "pt3", 00:14:01.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.592 "is_configured": true, 00:14:01.592 "data_offset": 2048, 00:14:01.592 "data_size": 63488 00:14:01.592 }, 00:14:01.592 { 00:14:01.592 "name": null, 00:14:01.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.592 "is_configured": false, 00:14:01.592 "data_offset": 2048, 
00:14:01.592 "data_size": 63488 00:14:01.592 } 00:14:01.592 ] 00:14:01.592 }' 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.592 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.865 [2024-11-27 21:46:24.851496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:01.865 [2024-11-27 21:46:24.851566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.865 [2024-11-27 21:46:24.851587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:01.865 [2024-11-27 21:46:24.851599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.865 [2024-11-27 21:46:24.852031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.865 [2024-11-27 21:46:24.852052] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:14:01.865 [2024-11-27 21:46:24.852151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:01.865 [2024-11-27 21:46:24.852175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:01.865 [2024-11-27 21:46:24.852274] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:01.865 [2024-11-27 21:46:24.852284] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:01.865 [2024-11-27 21:46:24.852547] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:14:01.865 pt4 00:14:01.865 [2024-11-27 21:46:24.853160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:01.865 [2024-11-27 21:46:24.853179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:14:01.865 [2024-11-27 21:46:24.853412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.865 
21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.865 "name": "raid_bdev1", 00:14:01.865 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:01.865 "strip_size_kb": 64, 00:14:01.865 "state": "online", 00:14:01.865 "raid_level": "raid5f", 00:14:01.865 "superblock": true, 00:14:01.865 "num_base_bdevs": 4, 00:14:01.865 "num_base_bdevs_discovered": 3, 00:14:01.865 "num_base_bdevs_operational": 3, 00:14:01.865 "base_bdevs_list": [ 00:14:01.865 { 00:14:01.865 "name": null, 00:14:01.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.865 "is_configured": false, 00:14:01.865 "data_offset": 2048, 00:14:01.865 "data_size": 63488 00:14:01.865 }, 00:14:01.865 { 00:14:01.865 "name": "pt2", 00:14:01.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:01.865 "is_configured": true, 00:14:01.865 "data_offset": 2048, 00:14:01.865 "data_size": 63488 00:14:01.865 }, 00:14:01.865 { 00:14:01.865 "name": "pt3", 00:14:01.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:01.865 "is_configured": true, 00:14:01.865 "data_offset": 2048, 00:14:01.865 "data_size": 63488 00:14:01.865 }, 00:14:01.865 { 00:14:01.865 "name": "pt4", 00:14:01.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:01.865 "is_configured": true, 00:14:01.865 "data_offset": 2048, 00:14:01.865 "data_size": 63488 00:14:01.865 } 00:14:01.865 ] 00:14:01.865 }' 00:14:01.865 21:46:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.865 21:46:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.126 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.126 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.126 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.126 [2024-11-27 21:46:25.242827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.126 [2024-11-27 21:46:25.242854] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.126 [2024-11-27 21:46:25.242922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.126 [2024-11-27 21:46:25.242995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.126 [2024-11-27 21:46:25.243004] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.387 [2024-11-27 21:46:25.302737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:02.387 [2024-11-27 21:46:25.302789] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.387 [2024-11-27 21:46:25.302822] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:02.387 [2024-11-27 21:46:25.302847] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.387 [2024-11-27 21:46:25.305024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.387 [2024-11-27 21:46:25.305059] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:02.387 [2024-11-27 21:46:25.305125] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:02.387 [2024-11-27 21:46:25.305159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:02.387 
[2024-11-27 21:46:25.305256] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:02.387 [2024-11-27 21:46:25.305267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.387 [2024-11-27 21:46:25.305295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:02.387 [2024-11-27 21:46:25.305332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:02.387 [2024-11-27 21:46:25.305436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:02.387 pt1 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.387 "name": "raid_bdev1", 00:14:02.387 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:02.387 "strip_size_kb": 64, 00:14:02.387 "state": "configuring", 00:14:02.387 "raid_level": "raid5f", 00:14:02.387 "superblock": true, 00:14:02.387 "num_base_bdevs": 4, 00:14:02.387 "num_base_bdevs_discovered": 2, 00:14:02.387 "num_base_bdevs_operational": 3, 00:14:02.387 "base_bdevs_list": [ 00:14:02.387 { 00:14:02.387 "name": null, 00:14:02.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.387 "is_configured": false, 00:14:02.387 "data_offset": 2048, 00:14:02.387 "data_size": 63488 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "name": "pt2", 00:14:02.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.387 "is_configured": true, 00:14:02.387 "data_offset": 2048, 00:14:02.387 "data_size": 63488 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "name": "pt3", 00:14:02.387 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.387 "is_configured": true, 00:14:02.387 "data_offset": 2048, 00:14:02.387 "data_size": 63488 00:14:02.387 }, 00:14:02.387 { 00:14:02.387 "name": null, 00:14:02.387 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.387 "is_configured": false, 00:14:02.387 "data_offset": 2048, 00:14:02.387 "data_size": 63488 00:14:02.387 } 00:14:02.387 ] 
00:14:02.387 }' 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.387 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.647 [2024-11-27 21:46:25.694046] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:14:02.647 [2024-11-27 21:46:25.694145] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.647 [2024-11-27 21:46:25.694185] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:02.647 [2024-11-27 21:46:25.694217] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.647 [2024-11-27 21:46:25.694620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.647 [2024-11-27 21:46:25.694678] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:14:02.647 [2024-11-27 21:46:25.694782] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:14:02.647 [2024-11-27 21:46:25.694855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:14:02.647 [2024-11-27 21:46:25.694998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:02.647 [2024-11-27 21:46:25.695041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:02.647 [2024-11-27 21:46:25.695301] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:02.647 [2024-11-27 21:46:25.695909] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:02.647 [2024-11-27 21:46:25.695967] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:02.647 [2024-11-27 21:46:25.696232] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.647 pt4 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.647 21:46:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.647 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.648 "name": "raid_bdev1", 00:14:02.648 "uuid": "aeb1910f-1431-4b09-bcad-0bb0664e8d9e", 00:14:02.648 "strip_size_kb": 64, 00:14:02.648 "state": "online", 00:14:02.648 "raid_level": "raid5f", 00:14:02.648 "superblock": true, 00:14:02.648 "num_base_bdevs": 4, 00:14:02.648 "num_base_bdevs_discovered": 3, 00:14:02.648 "num_base_bdevs_operational": 3, 00:14:02.648 "base_bdevs_list": [ 00:14:02.648 { 00:14:02.648 "name": null, 00:14:02.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.648 "is_configured": false, 00:14:02.648 "data_offset": 2048, 00:14:02.648 "data_size": 63488 00:14:02.648 }, 00:14:02.648 { 00:14:02.648 "name": "pt2", 00:14:02.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:02.648 "is_configured": true, 00:14:02.648 "data_offset": 2048, 00:14:02.648 "data_size": 63488 00:14:02.648 }, 00:14:02.648 { 00:14:02.648 "name": "pt3", 00:14:02.648 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:02.648 "is_configured": true, 00:14:02.648 "data_offset": 2048, 00:14:02.648 "data_size": 63488 
00:14:02.648 }, 00:14:02.648 { 00:14:02.648 "name": "pt4", 00:14:02.648 "uuid": "00000000-0000-0000-0000-000000000004", 00:14:02.648 "is_configured": true, 00:14:02.648 "data_offset": 2048, 00:14:02.648 "data_size": 63488 00:14:02.648 } 00:14:02.648 ] 00:14:02.648 }' 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.648 21:46:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.218 [2024-11-27 21:46:26.245393] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' aeb1910f-1431-4b09-bcad-0bb0664e8d9e '!=' aeb1910f-1431-4b09-bcad-0bb0664e8d9e ']' 00:14:03.218 21:46:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 94210 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 94210 ']' 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 94210 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94210 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94210' 00:14:03.218 killing process with pid 94210 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 94210 00:14:03.218 [2024-11-27 21:46:26.317365] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:03.218 [2024-11-27 21:46:26.317446] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:03.218 [2024-11-27 21:46:26.317522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:03.218 [2024-11-27 21:46:26.317531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:03.218 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 94210 00:14:03.479 [2024-11-27 21:46:26.359499] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.479 21:46:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:03.479 
00:14:03.479 real 0m6.965s 00:14:03.479 user 0m11.729s 00:14:03.479 sys 0m1.493s 00:14:03.479 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.479 ************************************ 00:14:03.479 END TEST raid5f_superblock_test 00:14:03.479 ************************************ 00:14:03.479 21:46:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.739 21:46:26 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:03.739 21:46:26 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:14:03.739 21:46:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:03.739 21:46:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:03.739 21:46:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 ************************************ 00:14:03.740 START TEST raid5f_rebuild_test 00:14:03.740 ************************************ 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:03.740 21:46:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=94679 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 94679 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 94679 ']' 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.740 21:46:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:03.740 [2024-11-27 21:46:26.732950] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:03.740 [2024-11-27 21:46:26.733156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:03.740 Zero copy mechanism will not be used. 
00:14:03.740 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94679 ] 00:14:03.999 [2024-11-27 21:46:26.876222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.999 [2024-11-27 21:46:26.901393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.999 [2024-11-27 21:46:26.943440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:03.999 [2024-11-27 21:46:26.943519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 BaseBdev1_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 [2024-11-27 21:46:27.566696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:04.570 [2024-11-27 21:46:27.566748] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:14:04.570 [2024-11-27 21:46:27.566797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:04.570 [2024-11-27 21:46:27.566828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.570 [2024-11-27 21:46:27.568982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.570 [2024-11-27 21:46:27.569018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:04.570 BaseBdev1 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 BaseBdev2_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 [2024-11-27 21:46:27.594926] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:04.570 [2024-11-27 21:46:27.594975] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.570 [2024-11-27 21:46:27.594998] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:04.570 [2024-11-27 21:46:27.595007] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.570 [2024-11-27 21:46:27.597102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.570 [2024-11-27 21:46:27.597143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:04.570 BaseBdev2 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.570 BaseBdev3_malloc 00:14:04.570 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.571 [2024-11-27 21:46:27.623426] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:04.571 [2024-11-27 21:46:27.623525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.571 [2024-11-27 21:46:27.623551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:04.571 [2024-11-27 21:46:27.623559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.571 [2024-11-27 21:46:27.625620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.571 [2024-11-27 
21:46:27.625656] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:04.571 BaseBdev3 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.571 BaseBdev4_malloc 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.571 [2024-11-27 21:46:27.670206] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:04.571 [2024-11-27 21:46:27.670303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.571 [2024-11-27 21:46:27.670353] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:04.571 [2024-11-27 21:46:27.670374] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.571 [2024-11-27 21:46:27.673709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.571 [2024-11-27 21:46:27.673843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.571 BaseBdev4 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.571 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.831 spare_malloc 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.831 spare_delay 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.831 [2024-11-27 21:46:27.711408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:04.831 [2024-11-27 21:46:27.711451] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.831 [2024-11-27 21:46:27.711485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:04.831 [2024-11-27 21:46:27.711493] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.831 [2024-11-27 21:46:27.713534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.831 [2024-11-27 21:46:27.713614] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:04.831 spare 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.831 [2024-11-27 21:46:27.723465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.831 [2024-11-27 21:46:27.725318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.831 [2024-11-27 21:46:27.725378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.831 [2024-11-27 21:46:27.725424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.831 [2024-11-27 21:46:27.725516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:04.831 [2024-11-27 21:46:27.725524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:14:04.831 [2024-11-27 21:46:27.725766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:04.831 [2024-11-27 21:46:27.726209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:04.831 [2024-11-27 21:46:27.726232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:04.831 [2024-11-27 21:46:27.726356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.831 21:46:27 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.831 "name": "raid_bdev1", 00:14:04.831 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:04.831 "strip_size_kb": 64, 00:14:04.831 "state": "online", 00:14:04.831 "raid_level": "raid5f", 00:14:04.831 "superblock": false, 00:14:04.831 "num_base_bdevs": 4, 00:14:04.831 
"num_base_bdevs_discovered": 4, 00:14:04.831 "num_base_bdevs_operational": 4, 00:14:04.831 "base_bdevs_list": [ 00:14:04.831 { 00:14:04.831 "name": "BaseBdev1", 00:14:04.831 "uuid": "92b461ed-37b3-5a83-8338-cd7df149c65b", 00:14:04.831 "is_configured": true, 00:14:04.831 "data_offset": 0, 00:14:04.831 "data_size": 65536 00:14:04.831 }, 00:14:04.831 { 00:14:04.831 "name": "BaseBdev2", 00:14:04.831 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:04.831 "is_configured": true, 00:14:04.831 "data_offset": 0, 00:14:04.831 "data_size": 65536 00:14:04.831 }, 00:14:04.831 { 00:14:04.831 "name": "BaseBdev3", 00:14:04.831 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:04.831 "is_configured": true, 00:14:04.831 "data_offset": 0, 00:14:04.831 "data_size": 65536 00:14:04.831 }, 00:14:04.831 { 00:14:04.831 "name": "BaseBdev4", 00:14:04.831 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:04.831 "is_configured": true, 00:14:04.831 "data_offset": 0, 00:14:04.831 "data_size": 65536 00:14:04.831 } 00:14:04.831 ] 00:14:04.831 }' 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.831 21:46:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.091 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:05.092 [2024-11-27 21:46:28.159641] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.092 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.352 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:05.352 [2024-11-27 21:46:28.431018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:05.352 /dev/nbd0 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:05.612 1+0 records in 00:14:05.612 1+0 records out 00:14:05.612 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428124 s, 9.6 MB/s 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:05.612 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:14:05.872 512+0 records in 00:14:05.872 512+0 records out 00:14:05.872 100663296 bytes (101 MB, 96 MiB) copied, 0.396351 s, 254 MB/s 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:05.872 21:46:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:06.132 [2024-11-27 21:46:29.115791] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.132 [2024-11-27 21:46:29.133476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.132 "name": "raid_bdev1", 00:14:06.132 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:06.132 "strip_size_kb": 64, 00:14:06.132 "state": "online", 00:14:06.132 "raid_level": "raid5f", 00:14:06.132 "superblock": false, 00:14:06.132 "num_base_bdevs": 4, 00:14:06.132 "num_base_bdevs_discovered": 3, 00:14:06.132 "num_base_bdevs_operational": 3, 00:14:06.132 "base_bdevs_list": [ 00:14:06.132 { 00:14:06.132 "name": null, 00:14:06.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.132 "is_configured": false, 00:14:06.132 "data_offset": 0, 00:14:06.132 "data_size": 65536 00:14:06.132 }, 00:14:06.132 { 00:14:06.132 "name": "BaseBdev2", 00:14:06.132 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:06.132 "is_configured": true, 00:14:06.132 "data_offset": 0, 00:14:06.132 "data_size": 65536 00:14:06.132 }, 00:14:06.132 { 00:14:06.132 "name": "BaseBdev3", 00:14:06.132 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:06.132 "is_configured": true, 00:14:06.132 
"data_offset": 0, 00:14:06.132 "data_size": 65536 00:14:06.132 }, 00:14:06.132 { 00:14:06.132 "name": "BaseBdev4", 00:14:06.132 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:06.132 "is_configured": true, 00:14:06.132 "data_offset": 0, 00:14:06.132 "data_size": 65536 00:14:06.132 } 00:14:06.132 ] 00:14:06.132 }' 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.132 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.699 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.699 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.699 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.699 [2024-11-27 21:46:29.564745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.699 [2024-11-27 21:46:29.568990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027da0 00:14:06.699 21:46:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.699 21:46:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:06.699 [2024-11-27 21:46:29.571179] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.636 
21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.636 "name": "raid_bdev1", 00:14:07.636 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:07.636 "strip_size_kb": 64, 00:14:07.636 "state": "online", 00:14:07.636 "raid_level": "raid5f", 00:14:07.636 "superblock": false, 00:14:07.636 "num_base_bdevs": 4, 00:14:07.636 "num_base_bdevs_discovered": 4, 00:14:07.636 "num_base_bdevs_operational": 4, 00:14:07.636 "process": { 00:14:07.636 "type": "rebuild", 00:14:07.636 "target": "spare", 00:14:07.636 "progress": { 00:14:07.636 "blocks": 19200, 00:14:07.636 "percent": 9 00:14:07.636 } 00:14:07.636 }, 00:14:07.636 "base_bdevs_list": [ 00:14:07.636 { 00:14:07.636 "name": "spare", 00:14:07.636 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:07.636 "is_configured": true, 00:14:07.636 "data_offset": 0, 00:14:07.636 "data_size": 65536 00:14:07.636 }, 00:14:07.636 { 00:14:07.636 "name": "BaseBdev2", 00:14:07.636 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:07.636 "is_configured": true, 00:14:07.636 "data_offset": 0, 00:14:07.636 "data_size": 65536 00:14:07.636 }, 00:14:07.636 { 00:14:07.636 "name": "BaseBdev3", 00:14:07.636 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:07.636 "is_configured": true, 00:14:07.636 "data_offset": 0, 00:14:07.636 "data_size": 65536 00:14:07.636 }, 00:14:07.636 { 00:14:07.636 "name": "BaseBdev4", 00:14:07.636 "uuid": 
"d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:07.636 "is_configured": true, 00:14:07.636 "data_offset": 0, 00:14:07.636 "data_size": 65536 00:14:07.636 } 00:14:07.636 ] 00:14:07.636 }' 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.636 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.636 [2024-11-27 21:46:30.728192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.895 [2024-11-27 21:46:30.776872] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:07.895 [2024-11-27 21:46:30.776927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.895 [2024-11-27 21:46:30.776947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.895 [2024-11-27 21:46:30.776955] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.895 "name": "raid_bdev1", 00:14:07.895 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:07.895 "strip_size_kb": 64, 00:14:07.895 "state": "online", 00:14:07.895 "raid_level": "raid5f", 00:14:07.895 "superblock": false, 00:14:07.895 "num_base_bdevs": 4, 00:14:07.895 "num_base_bdevs_discovered": 3, 00:14:07.895 "num_base_bdevs_operational": 3, 00:14:07.895 "base_bdevs_list": [ 00:14:07.895 { 00:14:07.895 "name": null, 00:14:07.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.895 "is_configured": false, 00:14:07.895 "data_offset": 0, 
00:14:07.895 "data_size": 65536 00:14:07.895 }, 00:14:07.895 { 00:14:07.895 "name": "BaseBdev2", 00:14:07.895 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:07.895 "is_configured": true, 00:14:07.895 "data_offset": 0, 00:14:07.895 "data_size": 65536 00:14:07.895 }, 00:14:07.895 { 00:14:07.895 "name": "BaseBdev3", 00:14:07.895 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:07.895 "is_configured": true, 00:14:07.895 "data_offset": 0, 00:14:07.895 "data_size": 65536 00:14:07.895 }, 00:14:07.895 { 00:14:07.895 "name": "BaseBdev4", 00:14:07.895 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:07.895 "is_configured": true, 00:14:07.895 "data_offset": 0, 00:14:07.895 "data_size": 65536 00:14:07.895 } 00:14:07.895 ] 00:14:07.895 }' 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.895 21:46:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.154 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.155 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.155 21:46:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.413 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.413 "name": "raid_bdev1", 00:14:08.413 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:08.413 "strip_size_kb": 64, 00:14:08.413 "state": "online", 00:14:08.413 "raid_level": "raid5f", 00:14:08.414 "superblock": false, 00:14:08.414 "num_base_bdevs": 4, 00:14:08.414 "num_base_bdevs_discovered": 3, 00:14:08.414 "num_base_bdevs_operational": 3, 00:14:08.414 "base_bdevs_list": [ 00:14:08.414 { 00:14:08.414 "name": null, 00:14:08.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.414 "is_configured": false, 00:14:08.414 "data_offset": 0, 00:14:08.414 "data_size": 65536 00:14:08.414 }, 00:14:08.414 { 00:14:08.414 "name": "BaseBdev2", 00:14:08.414 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:08.414 "is_configured": true, 00:14:08.414 "data_offset": 0, 00:14:08.414 "data_size": 65536 00:14:08.414 }, 00:14:08.414 { 00:14:08.414 "name": "BaseBdev3", 00:14:08.414 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:08.414 "is_configured": true, 00:14:08.414 "data_offset": 0, 00:14:08.414 "data_size": 65536 00:14:08.414 }, 00:14:08.414 { 00:14:08.414 "name": "BaseBdev4", 00:14:08.414 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:08.414 "is_configured": true, 00:14:08.414 "data_offset": 0, 00:14:08.414 "data_size": 65536 00:14:08.414 } 00:14:08.414 ] 00:14:08.414 }' 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd 
bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.414 [2024-11-27 21:46:31.409678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.414 [2024-11-27 21:46:31.413847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027e70 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.414 21:46:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:08.414 [2024-11-27 21:46:31.415972] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.350 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.610 21:46:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.610 "name": "raid_bdev1", 00:14:09.610 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:09.610 "strip_size_kb": 64, 00:14:09.610 "state": "online", 00:14:09.610 "raid_level": "raid5f", 00:14:09.610 "superblock": false, 00:14:09.610 "num_base_bdevs": 4, 00:14:09.610 "num_base_bdevs_discovered": 4, 00:14:09.610 "num_base_bdevs_operational": 4, 00:14:09.610 "process": { 00:14:09.610 "type": "rebuild", 00:14:09.610 "target": "spare", 00:14:09.610 "progress": { 00:14:09.610 "blocks": 19200, 00:14:09.610 "percent": 9 00:14:09.610 } 00:14:09.610 }, 00:14:09.610 "base_bdevs_list": [ 00:14:09.610 { 00:14:09.610 "name": "spare", 00:14:09.610 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:09.610 "is_configured": true, 00:14:09.610 "data_offset": 0, 00:14:09.610 "data_size": 65536 00:14:09.610 }, 00:14:09.610 { 00:14:09.610 "name": "BaseBdev2", 00:14:09.610 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:09.610 "is_configured": true, 00:14:09.610 "data_offset": 0, 00:14:09.610 "data_size": 65536 00:14:09.610 }, 00:14:09.610 { 00:14:09.610 "name": "BaseBdev3", 00:14:09.610 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:09.610 "is_configured": true, 00:14:09.610 "data_offset": 0, 00:14:09.610 "data_size": 65536 00:14:09.610 }, 00:14:09.610 { 00:14:09.610 "name": "BaseBdev4", 00:14:09.610 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:09.610 "is_configured": true, 00:14:09.611 "data_offset": 0, 00:14:09.611 "data_size": 65536 00:14:09.611 } 00:14:09.611 ] 00:14:09.611 }' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.611 "name": "raid_bdev1", 00:14:09.611 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:09.611 "strip_size_kb": 64, 00:14:09.611 "state": "online", 00:14:09.611 "raid_level": "raid5f", 00:14:09.611 "superblock": false, 
00:14:09.611 "num_base_bdevs": 4, 00:14:09.611 "num_base_bdevs_discovered": 4, 00:14:09.611 "num_base_bdevs_operational": 4, 00:14:09.611 "process": { 00:14:09.611 "type": "rebuild", 00:14:09.611 "target": "spare", 00:14:09.611 "progress": { 00:14:09.611 "blocks": 21120, 00:14:09.611 "percent": 10 00:14:09.611 } 00:14:09.611 }, 00:14:09.611 "base_bdevs_list": [ 00:14:09.611 { 00:14:09.611 "name": "spare", 00:14:09.611 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:09.611 "is_configured": true, 00:14:09.611 "data_offset": 0, 00:14:09.611 "data_size": 65536 00:14:09.611 }, 00:14:09.611 { 00:14:09.611 "name": "BaseBdev2", 00:14:09.611 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:09.611 "is_configured": true, 00:14:09.611 "data_offset": 0, 00:14:09.611 "data_size": 65536 00:14:09.611 }, 00:14:09.611 { 00:14:09.611 "name": "BaseBdev3", 00:14:09.611 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:09.611 "is_configured": true, 00:14:09.611 "data_offset": 0, 00:14:09.611 "data_size": 65536 00:14:09.611 }, 00:14:09.611 { 00:14:09.611 "name": "BaseBdev4", 00:14:09.611 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:09.611 "is_configured": true, 00:14:09.611 "data_offset": 0, 00:14:09.611 "data_size": 65536 00:14:09.611 } 00:14:09.611 ] 00:14:09.611 }' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.611 21:46:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.991 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.991 "name": "raid_bdev1", 00:14:10.991 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:10.991 "strip_size_kb": 64, 00:14:10.991 "state": "online", 00:14:10.991 "raid_level": "raid5f", 00:14:10.991 "superblock": false, 00:14:10.991 "num_base_bdevs": 4, 00:14:10.991 "num_base_bdevs_discovered": 4, 00:14:10.991 "num_base_bdevs_operational": 4, 00:14:10.991 "process": { 00:14:10.991 "type": "rebuild", 00:14:10.991 "target": "spare", 00:14:10.991 "progress": { 00:14:10.991 "blocks": 42240, 00:14:10.991 "percent": 21 00:14:10.991 } 00:14:10.991 }, 00:14:10.991 "base_bdevs_list": [ 00:14:10.991 { 00:14:10.991 "name": "spare", 00:14:10.991 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:10.991 "is_configured": true, 00:14:10.991 "data_offset": 0, 00:14:10.991 "data_size": 65536 00:14:10.991 }, 00:14:10.991 { 00:14:10.991 
"name": "BaseBdev2", 00:14:10.991 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:10.991 "is_configured": true, 00:14:10.991 "data_offset": 0, 00:14:10.991 "data_size": 65536 00:14:10.991 }, 00:14:10.991 { 00:14:10.992 "name": "BaseBdev3", 00:14:10.992 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:10.992 "is_configured": true, 00:14:10.992 "data_offset": 0, 00:14:10.992 "data_size": 65536 00:14:10.992 }, 00:14:10.992 { 00:14:10.992 "name": "BaseBdev4", 00:14:10.992 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:10.992 "is_configured": true, 00:14:10.992 "data_offset": 0, 00:14:10.992 "data_size": 65536 00:14:10.992 } 00:14:10.992 ] 00:14:10.992 }' 00:14:10.992 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.992 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.992 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.992 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.992 21:46:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.931 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.931 "name": "raid_bdev1", 00:14:11.931 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:11.931 "strip_size_kb": 64, 00:14:11.931 "state": "online", 00:14:11.931 "raid_level": "raid5f", 00:14:11.931 "superblock": false, 00:14:11.931 "num_base_bdevs": 4, 00:14:11.931 "num_base_bdevs_discovered": 4, 00:14:11.931 "num_base_bdevs_operational": 4, 00:14:11.931 "process": { 00:14:11.931 "type": "rebuild", 00:14:11.931 "target": "spare", 00:14:11.931 "progress": { 00:14:11.931 "blocks": 65280, 00:14:11.931 "percent": 33 00:14:11.931 } 00:14:11.931 }, 00:14:11.931 "base_bdevs_list": [ 00:14:11.931 { 00:14:11.931 "name": "spare", 00:14:11.931 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:11.931 "is_configured": true, 00:14:11.931 "data_offset": 0, 00:14:11.931 "data_size": 65536 00:14:11.931 }, 00:14:11.931 { 00:14:11.931 "name": "BaseBdev2", 00:14:11.931 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:11.931 "is_configured": true, 00:14:11.932 "data_offset": 0, 00:14:11.932 "data_size": 65536 00:14:11.932 }, 00:14:11.932 { 00:14:11.932 "name": "BaseBdev3", 00:14:11.932 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:11.932 "is_configured": true, 00:14:11.932 "data_offset": 0, 00:14:11.932 "data_size": 65536 00:14:11.932 }, 00:14:11.932 { 00:14:11.932 "name": "BaseBdev4", 00:14:11.932 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:11.932 "is_configured": true, 00:14:11.932 "data_offset": 0, 00:14:11.932 
"data_size": 65536 00:14:11.932 } 00:14:11.932 ] 00:14:11.932 }' 00:14:11.932 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.932 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.932 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.932 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.932 21:46:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.873 21:46:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.133 "name": "raid_bdev1", 00:14:13.133 "uuid": 
"cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:13.133 "strip_size_kb": 64, 00:14:13.133 "state": "online", 00:14:13.133 "raid_level": "raid5f", 00:14:13.133 "superblock": false, 00:14:13.133 "num_base_bdevs": 4, 00:14:13.133 "num_base_bdevs_discovered": 4, 00:14:13.133 "num_base_bdevs_operational": 4, 00:14:13.133 "process": { 00:14:13.133 "type": "rebuild", 00:14:13.133 "target": "spare", 00:14:13.133 "progress": { 00:14:13.133 "blocks": 86400, 00:14:13.133 "percent": 43 00:14:13.133 } 00:14:13.133 }, 00:14:13.133 "base_bdevs_list": [ 00:14:13.133 { 00:14:13.133 "name": "spare", 00:14:13.133 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:13.133 "is_configured": true, 00:14:13.133 "data_offset": 0, 00:14:13.133 "data_size": 65536 00:14:13.133 }, 00:14:13.133 { 00:14:13.133 "name": "BaseBdev2", 00:14:13.133 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:13.133 "is_configured": true, 00:14:13.133 "data_offset": 0, 00:14:13.133 "data_size": 65536 00:14:13.133 }, 00:14:13.133 { 00:14:13.133 "name": "BaseBdev3", 00:14:13.133 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:13.133 "is_configured": true, 00:14:13.133 "data_offset": 0, 00:14:13.133 "data_size": 65536 00:14:13.133 }, 00:14:13.133 { 00:14:13.133 "name": "BaseBdev4", 00:14:13.133 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:13.133 "is_configured": true, 00:14:13.133 "data_offset": 0, 00:14:13.133 "data_size": 65536 00:14:13.133 } 00:14:13.133 ] 00:14:13.133 }' 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.133 21:46:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- 
# sleep 1 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.072 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.072 "name": "raid_bdev1", 00:14:14.072 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:14.072 "strip_size_kb": 64, 00:14:14.072 "state": "online", 00:14:14.072 "raid_level": "raid5f", 00:14:14.072 "superblock": false, 00:14:14.072 "num_base_bdevs": 4, 00:14:14.072 "num_base_bdevs_discovered": 4, 00:14:14.072 "num_base_bdevs_operational": 4, 00:14:14.072 "process": { 00:14:14.072 "type": "rebuild", 00:14:14.072 "target": "spare", 00:14:14.072 "progress": { 00:14:14.072 "blocks": 107520, 00:14:14.072 "percent": 54 00:14:14.072 } 00:14:14.072 }, 00:14:14.072 "base_bdevs_list": [ 00:14:14.072 { 00:14:14.072 "name": "spare", 00:14:14.072 "uuid": 
"d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:14.072 "is_configured": true, 00:14:14.072 "data_offset": 0, 00:14:14.072 "data_size": 65536 00:14:14.072 }, 00:14:14.072 { 00:14:14.072 "name": "BaseBdev2", 00:14:14.072 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:14.072 "is_configured": true, 00:14:14.072 "data_offset": 0, 00:14:14.072 "data_size": 65536 00:14:14.072 }, 00:14:14.072 { 00:14:14.072 "name": "BaseBdev3", 00:14:14.072 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:14.072 "is_configured": true, 00:14:14.072 "data_offset": 0, 00:14:14.072 "data_size": 65536 00:14:14.073 }, 00:14:14.073 { 00:14:14.073 "name": "BaseBdev4", 00:14:14.073 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:14.073 "is_configured": true, 00:14:14.073 "data_offset": 0, 00:14:14.073 "data_size": 65536 00:14:14.073 } 00:14:14.073 ] 00:14:14.073 }' 00:14:14.073 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.073 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.073 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.332 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.332 21:46:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.269 21:46:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.269 "name": "raid_bdev1", 00:14:15.269 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:15.269 "strip_size_kb": 64, 00:14:15.269 "state": "online", 00:14:15.269 "raid_level": "raid5f", 00:14:15.269 "superblock": false, 00:14:15.269 "num_base_bdevs": 4, 00:14:15.269 "num_base_bdevs_discovered": 4, 00:14:15.269 "num_base_bdevs_operational": 4, 00:14:15.269 "process": { 00:14:15.269 "type": "rebuild", 00:14:15.269 "target": "spare", 00:14:15.269 "progress": { 00:14:15.269 "blocks": 128640, 00:14:15.269 "percent": 65 00:14:15.269 } 00:14:15.269 }, 00:14:15.269 "base_bdevs_list": [ 00:14:15.269 { 00:14:15.269 "name": "spare", 00:14:15.269 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:15.269 "is_configured": true, 00:14:15.269 "data_offset": 0, 00:14:15.269 "data_size": 65536 00:14:15.269 }, 00:14:15.269 { 00:14:15.269 "name": "BaseBdev2", 00:14:15.269 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:15.269 "is_configured": true, 00:14:15.269 "data_offset": 0, 00:14:15.269 "data_size": 65536 00:14:15.269 }, 00:14:15.269 { 00:14:15.269 "name": "BaseBdev3", 00:14:15.269 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:15.269 "is_configured": true, 00:14:15.269 "data_offset": 0, 00:14:15.269 "data_size": 65536 00:14:15.269 }, 
00:14:15.269 { 00:14:15.269 "name": "BaseBdev4", 00:14:15.269 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:15.269 "is_configured": true, 00:14:15.269 "data_offset": 0, 00:14:15.269 "data_size": 65536 00:14:15.269 } 00:14:15.269 ] 00:14:15.269 }' 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.269 21:46:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.661 "name": "raid_bdev1", 00:14:16.661 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:16.661 "strip_size_kb": 64, 00:14:16.661 "state": "online", 00:14:16.661 "raid_level": "raid5f", 00:14:16.661 "superblock": false, 00:14:16.661 "num_base_bdevs": 4, 00:14:16.661 "num_base_bdevs_discovered": 4, 00:14:16.661 "num_base_bdevs_operational": 4, 00:14:16.661 "process": { 00:14:16.661 "type": "rebuild", 00:14:16.661 "target": "spare", 00:14:16.661 "progress": { 00:14:16.661 "blocks": 151680, 00:14:16.661 "percent": 77 00:14:16.661 } 00:14:16.661 }, 00:14:16.661 "base_bdevs_list": [ 00:14:16.661 { 00:14:16.661 "name": "spare", 00:14:16.661 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:16.661 "is_configured": true, 00:14:16.661 "data_offset": 0, 00:14:16.661 "data_size": 65536 00:14:16.661 }, 00:14:16.661 { 00:14:16.661 "name": "BaseBdev2", 00:14:16.661 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:16.661 "is_configured": true, 00:14:16.661 "data_offset": 0, 00:14:16.661 "data_size": 65536 00:14:16.661 }, 00:14:16.661 { 00:14:16.661 "name": "BaseBdev3", 00:14:16.661 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:16.661 "is_configured": true, 00:14:16.661 "data_offset": 0, 00:14:16.661 "data_size": 65536 00:14:16.661 }, 00:14:16.661 { 00:14:16.661 "name": "BaseBdev4", 00:14:16.661 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:16.661 "is_configured": true, 00:14:16.661 "data_offset": 0, 00:14:16.661 "data_size": 65536 00:14:16.661 } 00:14:16.661 ] 00:14:16.661 }' 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.661 21:46:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.661 21:46:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:17.613 "name": "raid_bdev1", 00:14:17.613 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:17.613 "strip_size_kb": 64, 00:14:17.613 "state": "online", 00:14:17.613 "raid_level": "raid5f", 00:14:17.613 "superblock": false, 00:14:17.613 "num_base_bdevs": 4, 00:14:17.613 "num_base_bdevs_discovered": 4, 00:14:17.613 "num_base_bdevs_operational": 4, 00:14:17.613 "process": { 00:14:17.613 "type": "rebuild", 00:14:17.613 "target": "spare", 00:14:17.613 "progress": { 00:14:17.613 "blocks": 172800, 
00:14:17.613 "percent": 87 00:14:17.613 } 00:14:17.613 }, 00:14:17.613 "base_bdevs_list": [ 00:14:17.613 { 00:14:17.613 "name": "spare", 00:14:17.613 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev2", 00:14:17.613 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev3", 00:14:17.613 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 }, 00:14:17.613 { 00:14:17.613 "name": "BaseBdev4", 00:14:17.613 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:17.613 "is_configured": true, 00:14:17.613 "data_offset": 0, 00:14:17.613 "data_size": 65536 00:14:17.613 } 00:14:17.613 ] 00:14:17.613 }' 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.613 21:46:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.548 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.808 "name": "raid_bdev1", 00:14:18.808 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:18.808 "strip_size_kb": 64, 00:14:18.808 "state": "online", 00:14:18.808 "raid_level": "raid5f", 00:14:18.808 "superblock": false, 00:14:18.808 "num_base_bdevs": 4, 00:14:18.808 "num_base_bdevs_discovered": 4, 00:14:18.808 "num_base_bdevs_operational": 4, 00:14:18.808 "process": { 00:14:18.808 "type": "rebuild", 00:14:18.808 "target": "spare", 00:14:18.808 "progress": { 00:14:18.808 "blocks": 195840, 00:14:18.808 "percent": 99 00:14:18.808 } 00:14:18.808 }, 00:14:18.808 "base_bdevs_list": [ 00:14:18.808 { 00:14:18.808 "name": "spare", 00:14:18.808 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:18.808 "is_configured": true, 00:14:18.808 "data_offset": 0, 00:14:18.808 "data_size": 65536 00:14:18.808 }, 00:14:18.808 { 00:14:18.808 "name": "BaseBdev2", 00:14:18.808 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:18.808 "is_configured": true, 00:14:18.808 "data_offset": 0, 00:14:18.808 "data_size": 65536 00:14:18.808 }, 00:14:18.808 { 00:14:18.808 "name": "BaseBdev3", 00:14:18.808 "uuid": 
"813e31c8-9302-5119-b910-c9c984b72e00", 00:14:18.808 "is_configured": true, 00:14:18.808 "data_offset": 0, 00:14:18.808 "data_size": 65536 00:14:18.808 }, 00:14:18.808 { 00:14:18.808 "name": "BaseBdev4", 00:14:18.808 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:18.808 "is_configured": true, 00:14:18.808 "data_offset": 0, 00:14:18.808 "data_size": 65536 00:14:18.808 } 00:14:18.808 ] 00:14:18.808 }' 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.808 [2024-11-27 21:46:41.761143] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:18.808 [2024-11-27 21:46:41.761281] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:18.808 [2024-11-27 21:46:41.761345] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.808 21:46:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.746 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.746 "name": "raid_bdev1", 00:14:19.746 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:19.746 "strip_size_kb": 64, 00:14:19.746 "state": "online", 00:14:19.746 "raid_level": "raid5f", 00:14:19.746 "superblock": false, 00:14:19.746 "num_base_bdevs": 4, 00:14:19.746 "num_base_bdevs_discovered": 4, 00:14:19.746 "num_base_bdevs_operational": 4, 00:14:19.746 "base_bdevs_list": [ 00:14:19.746 { 00:14:19.746 "name": "spare", 00:14:19.746 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:19.746 "is_configured": true, 00:14:19.746 "data_offset": 0, 00:14:19.746 "data_size": 65536 00:14:19.746 }, 00:14:19.746 { 00:14:19.746 "name": "BaseBdev2", 00:14:19.746 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:19.746 "is_configured": true, 00:14:19.746 "data_offset": 0, 00:14:19.746 "data_size": 65536 00:14:19.746 }, 00:14:19.746 { 00:14:19.746 "name": "BaseBdev3", 00:14:19.746 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:19.746 "is_configured": true, 00:14:19.746 "data_offset": 0, 00:14:19.746 "data_size": 65536 00:14:19.746 }, 00:14:19.746 { 00:14:19.746 "name": "BaseBdev4", 00:14:19.746 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:19.746 "is_configured": true, 00:14:19.746 "data_offset": 0, 00:14:19.746 "data_size": 65536 00:14:19.746 } 00:14:19.746 ] 00:14:19.746 }' 00:14:19.746 21:46:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.007 "name": "raid_bdev1", 00:14:20.007 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:20.007 "strip_size_kb": 64, 00:14:20.007 "state": "online", 00:14:20.007 "raid_level": "raid5f", 00:14:20.007 "superblock": false, 00:14:20.007 "num_base_bdevs": 4, 00:14:20.007 
"num_base_bdevs_discovered": 4, 00:14:20.007 "num_base_bdevs_operational": 4, 00:14:20.007 "base_bdevs_list": [ 00:14:20.007 { 00:14:20.007 "name": "spare", 00:14:20.007 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev2", 00:14:20.007 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev3", 00:14:20.007 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev4", 00:14:20.007 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 } 00:14:20.007 ] 00:14:20.007 }' 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.007 21:46:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.007 "name": "raid_bdev1", 00:14:20.007 "uuid": "cf730b24-081e-4087-bb01-b9d38c9579c9", 00:14:20.007 "strip_size_kb": 64, 00:14:20.007 "state": "online", 00:14:20.007 "raid_level": "raid5f", 00:14:20.007 "superblock": false, 00:14:20.007 "num_base_bdevs": 4, 00:14:20.007 "num_base_bdevs_discovered": 4, 00:14:20.007 "num_base_bdevs_operational": 4, 00:14:20.007 "base_bdevs_list": [ 00:14:20.007 { 00:14:20.007 "name": "spare", 00:14:20.007 "uuid": "d9928d59-cc09-536b-8fb4-6315ff467a23", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev2", 00:14:20.007 "uuid": "34ed6c7b-0090-5a8f-80d3-587df4285817", 00:14:20.007 "is_configured": true, 00:14:20.007 
"data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev3", 00:14:20.007 "uuid": "813e31c8-9302-5119-b910-c9c984b72e00", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 }, 00:14:20.007 { 00:14:20.007 "name": "BaseBdev4", 00:14:20.007 "uuid": "d552ccde-3074-5428-86ba-9fefb601b7c8", 00:14:20.007 "is_configured": true, 00:14:20.007 "data_offset": 0, 00:14:20.007 "data_size": 65536 00:14:20.007 } 00:14:20.007 ] 00:14:20.007 }' 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.007 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.577 [2024-11-27 21:46:43.459997] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.577 [2024-11-27 21:46:43.460079] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.577 [2024-11-27 21:46:43.460217] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.577 [2024-11-27 21:46:43.460358] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.577 [2024-11-27 21:46:43.460424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.577 21:46:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.577 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:20.837 /dev/nbd0 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.838 21:46:43 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.838 1+0 records in 00:14:20.838 1+0 records out 00:14:20.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384774 s, 10.6 MB/s 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.838 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:21.097 /dev/nbd1 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:21.097 21:46:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:21.097 1+0 records in 00:14:21.097 1+0 records out 00:14:21.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394239 s, 10.4 MB/s 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:21.097 
21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:21.097 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.098 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.358 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 94679 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 94679 ']' 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 94679 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94679 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.618 killing process with pid 94679 00:14:21.618 Received shutdown signal, test time was about 60.000000 seconds 00:14:21.618 00:14:21.618 Latency(us) 00:14:21.618 [2024-11-27T21:46:44.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.618 [2024-11-27T21:46:44.739Z] =================================================================================================================== 00:14:21.618 [2024-11-27T21:46:44.739Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94679' 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 94679 00:14:21.618 [2024-11-27 21:46:44.553331] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.618 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 94679 00:14:21.618 [2024-11-27 21:46:44.600882] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:21.878 00:14:21.878 real 0m18.157s 00:14:21.878 user 0m21.885s 00:14:21.878 sys 0m2.213s 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 ************************************ 00:14:21.878 END TEST raid5f_rebuild_test 00:14:21.878 ************************************ 00:14:21.878 21:46:44 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:14:21.878 21:46:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:14:21.878 21:46:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:21.878 21:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 ************************************ 00:14:21.878 START TEST raid5f_rebuild_test_sb 00:14:21.878 ************************************ 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=95184 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 95184 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 95184 ']' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:21.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:21.878 21:46:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.878 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:21.878 Zero copy mechanism will not be used. 00:14:21.878 [2024-11-27 21:46:44.966448] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:21.878 [2024-11-27 21:46:44.966556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95184 ] 00:14:22.138 [2024-11-27 21:46:45.118353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.138 [2024-11-27 21:46:45.142306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.138 [2024-11-27 21:46:45.183391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.138 [2024-11-27 21:46:45.183433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.707 BaseBdev1_malloc 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.707 [2024-11-27 21:46:45.805940] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:22.707 [2024-11-27 21:46:45.806056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.707 [2024-11-27 21:46:45.806086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:22.707 [2024-11-27 21:46:45.806097] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.707 [2024-11-27 21:46:45.808168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.707 [2024-11-27 21:46:45.808202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.707 BaseBdev1 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.707 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 BaseBdev2_malloc 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 [2024-11-27 21:46:45.834417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:22.967 [2024-11-27 21:46:45.834466] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:14:22.967 [2024-11-27 21:46:45.834489] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.967 [2024-11-27 21:46:45.834497] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.967 [2024-11-27 21:46:45.836669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.967 [2024-11-27 21:46:45.836711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.967 BaseBdev2 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 BaseBdev3_malloc 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 [2024-11-27 21:46:45.862929] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:22.967 [2024-11-27 21:46:45.862981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.967 [2024-11-27 21:46:45.863003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.967 [2024-11-27 
21:46:45.863011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.967 [2024-11-27 21:46:45.865043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.967 [2024-11-27 21:46:45.865077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.967 BaseBdev3 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 BaseBdev4_malloc 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.967 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.967 [2024-11-27 21:46:45.908390] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:22.967 [2024-11-27 21:46:45.908481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.967 [2024-11-27 21:46:45.908526] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.967 [2024-11-27 21:46:45.908546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.967 [2024-11-27 21:46:45.912823] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:14:22.968 [2024-11-27 21:46:45.912868] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:22.968 BaseBdev4 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.968 spare_malloc 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.968 spare_delay 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.968 [2024-11-27 21:46:45.949981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.968 [2024-11-27 21:46:45.950021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.968 [2024-11-27 21:46:45.950054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 
00:14:22.968 [2024-11-27 21:46:45.950062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.968 [2024-11-27 21:46:45.952073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.968 [2024-11-27 21:46:45.952106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.968 spare 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.968 [2024-11-27 21:46:45.962037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.968 [2024-11-27 21:46:45.963891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.968 [2024-11-27 21:46:45.963950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.968 [2024-11-27 21:46:45.963994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.968 [2024-11-27 21:46:45.964185] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:22.968 [2024-11-27 21:46:45.964202] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:22.968 [2024-11-27 21:46:45.964454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:22.968 [2024-11-27 21:46:45.964912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:22.968 [2024-11-27 21:46:45.964927] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000001200 00:14:22.968 [2024-11-27 21:46:45.965036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.968 21:46:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.968 21:46:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.968 "name": "raid_bdev1", 00:14:22.968 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:22.968 "strip_size_kb": 64, 00:14:22.968 "state": "online", 00:14:22.968 "raid_level": "raid5f", 00:14:22.968 "superblock": true, 00:14:22.968 "num_base_bdevs": 4, 00:14:22.968 "num_base_bdevs_discovered": 4, 00:14:22.968 "num_base_bdevs_operational": 4, 00:14:22.968 "base_bdevs_list": [ 00:14:22.968 { 00:14:22.968 "name": "BaseBdev1", 00:14:22.968 "uuid": "e7ea9c70-0ea0-5813-9f97-13e627029f9a", 00:14:22.968 "is_configured": true, 00:14:22.968 "data_offset": 2048, 00:14:22.968 "data_size": 63488 00:14:22.968 }, 00:14:22.968 { 00:14:22.968 "name": "BaseBdev2", 00:14:22.968 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:22.968 "is_configured": true, 00:14:22.968 "data_offset": 2048, 00:14:22.968 "data_size": 63488 00:14:22.968 }, 00:14:22.968 { 00:14:22.968 "name": "BaseBdev3", 00:14:22.968 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:22.968 "is_configured": true, 00:14:22.968 "data_offset": 2048, 00:14:22.968 "data_size": 63488 00:14:22.968 }, 00:14:22.968 { 00:14:22.968 "name": "BaseBdev4", 00:14:22.968 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:22.968 "is_configured": true, 00:14:22.968 "data_offset": 2048, 00:14:22.968 "data_size": 63488 00:14:22.968 } 00:14:22.968 ] 00:14:22.968 }' 00:14:22.968 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.968 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.537 21:46:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.537 [2024-11-27 21:46:46.382193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:23.537 21:46:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.537 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:23.537 [2024-11-27 21:46:46.653629] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:14:23.797 /dev/nbd0 00:14:23.797 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:23.797 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:23.797 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:23.797 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:23.798 1+0 records in 00:14:23.798 
1+0 records out 00:14:23.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347411 s, 11.8 MB/s 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:14:23.798 21:46:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:14:24.057 496+0 records in 00:14:24.057 496+0 records out 00:14:24.057 97517568 bytes (98 MB, 93 MiB) copied, 0.391281 s, 249 MB/s 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:24.057 21:46:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:24.057 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:24.316 [2024-11-27 21:46:47.326894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.316 [2024-11-27 21:46:47.345860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:24.316 21:46:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.316 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.316 "name": "raid_bdev1", 00:14:24.316 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:24.316 "strip_size_kb": 64, 00:14:24.316 "state": "online", 00:14:24.316 "raid_level": "raid5f", 00:14:24.316 "superblock": true, 00:14:24.316 "num_base_bdevs": 4, 00:14:24.316 "num_base_bdevs_discovered": 3, 00:14:24.316 "num_base_bdevs_operational": 3, 00:14:24.316 
"base_bdevs_list": [ 00:14:24.316 { 00:14:24.316 "name": null, 00:14:24.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.316 "is_configured": false, 00:14:24.316 "data_offset": 0, 00:14:24.316 "data_size": 63488 00:14:24.316 }, 00:14:24.316 { 00:14:24.316 "name": "BaseBdev2", 00:14:24.316 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:24.316 "is_configured": true, 00:14:24.316 "data_offset": 2048, 00:14:24.316 "data_size": 63488 00:14:24.316 }, 00:14:24.316 { 00:14:24.316 "name": "BaseBdev3", 00:14:24.316 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:24.316 "is_configured": true, 00:14:24.316 "data_offset": 2048, 00:14:24.316 "data_size": 63488 00:14:24.316 }, 00:14:24.316 { 00:14:24.317 "name": "BaseBdev4", 00:14:24.317 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:24.317 "is_configured": true, 00:14:24.317 "data_offset": 2048, 00:14:24.317 "data_size": 63488 00:14:24.317 } 00:14:24.317 ] 00:14:24.317 }' 00:14:24.317 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.317 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.885 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.885 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.885 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.885 [2024-11-27 21:46:47.761203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.886 [2024-11-27 21:46:47.765591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000270a0 00:14:24.886 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.886 21:46:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:24.886 [2024-11-27 21:46:47.767987] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.838 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:25.839 "name": "raid_bdev1", 00:14:25.839 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:25.839 "strip_size_kb": 64, 00:14:25.839 "state": "online", 00:14:25.839 "raid_level": "raid5f", 00:14:25.839 "superblock": true, 00:14:25.839 "num_base_bdevs": 4, 00:14:25.839 "num_base_bdevs_discovered": 4, 00:14:25.839 "num_base_bdevs_operational": 4, 00:14:25.839 "process": { 00:14:25.839 "type": "rebuild", 00:14:25.839 "target": "spare", 00:14:25.839 "progress": { 00:14:25.839 "blocks": 19200, 00:14:25.839 "percent": 10 00:14:25.839 } 00:14:25.839 }, 00:14:25.839 "base_bdevs_list": [ 00:14:25.839 { 00:14:25.839 "name": "spare", 00:14:25.839 "uuid": 
"f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:25.839 "is_configured": true, 00:14:25.839 "data_offset": 2048, 00:14:25.839 "data_size": 63488 00:14:25.839 }, 00:14:25.839 { 00:14:25.839 "name": "BaseBdev2", 00:14:25.839 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:25.839 "is_configured": true, 00:14:25.839 "data_offset": 2048, 00:14:25.839 "data_size": 63488 00:14:25.839 }, 00:14:25.839 { 00:14:25.839 "name": "BaseBdev3", 00:14:25.839 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:25.839 "is_configured": true, 00:14:25.839 "data_offset": 2048, 00:14:25.839 "data_size": 63488 00:14:25.839 }, 00:14:25.839 { 00:14:25.839 "name": "BaseBdev4", 00:14:25.839 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:25.839 "is_configured": true, 00:14:25.839 "data_offset": 2048, 00:14:25.839 "data_size": 63488 00:14:25.839 } 00:14:25.839 ] 00:14:25.839 }' 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.839 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.839 [2024-11-27 21:46:48.932402] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.098 [2024-11-27 21:46:48.974549] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:26.098 [2024-11-27 21:46:48.974637] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.098 [2024-11-27 21:46:48.974656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:26.098 [2024-11-27 21:46:48.974664] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.098 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.099 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.099 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.099 21:46:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:26.099 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.099 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.099 "name": "raid_bdev1", 00:14:26.099 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:26.099 "strip_size_kb": 64, 00:14:26.099 "state": "online", 00:14:26.099 "raid_level": "raid5f", 00:14:26.099 "superblock": true, 00:14:26.099 "num_base_bdevs": 4, 00:14:26.099 "num_base_bdevs_discovered": 3, 00:14:26.099 "num_base_bdevs_operational": 3, 00:14:26.099 "base_bdevs_list": [ 00:14:26.099 { 00:14:26.099 "name": null, 00:14:26.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.099 "is_configured": false, 00:14:26.099 "data_offset": 0, 00:14:26.099 "data_size": 63488 00:14:26.099 }, 00:14:26.099 { 00:14:26.099 "name": "BaseBdev2", 00:14:26.099 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:26.099 "is_configured": true, 00:14:26.099 "data_offset": 2048, 00:14:26.099 "data_size": 63488 00:14:26.099 }, 00:14:26.099 { 00:14:26.099 "name": "BaseBdev3", 00:14:26.099 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:26.099 "is_configured": true, 00:14:26.099 "data_offset": 2048, 00:14:26.099 "data_size": 63488 00:14:26.099 }, 00:14:26.099 { 00:14:26.099 "name": "BaseBdev4", 00:14:26.099 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:26.099 "is_configured": true, 00:14:26.099 "data_offset": 2048, 00:14:26.099 "data_size": 63488 00:14:26.099 } 00:14:26.099 ] 00:14:26.099 }' 00:14:26.099 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.099 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.357 
21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.357 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.617 "name": "raid_bdev1", 00:14:26.617 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:26.617 "strip_size_kb": 64, 00:14:26.617 "state": "online", 00:14:26.617 "raid_level": "raid5f", 00:14:26.617 "superblock": true, 00:14:26.617 "num_base_bdevs": 4, 00:14:26.617 "num_base_bdevs_discovered": 3, 00:14:26.617 "num_base_bdevs_operational": 3, 00:14:26.617 "base_bdevs_list": [ 00:14:26.617 { 00:14:26.617 "name": null, 00:14:26.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.617 "is_configured": false, 00:14:26.617 "data_offset": 0, 00:14:26.617 "data_size": 63488 00:14:26.617 }, 00:14:26.617 { 00:14:26.617 "name": "BaseBdev2", 00:14:26.617 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:26.617 "is_configured": true, 00:14:26.617 "data_offset": 2048, 00:14:26.617 "data_size": 63488 00:14:26.617 }, 00:14:26.617 { 00:14:26.617 "name": "BaseBdev3", 00:14:26.617 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:26.617 "is_configured": true, 00:14:26.617 "data_offset": 2048, 00:14:26.617 
"data_size": 63488 00:14:26.617 }, 00:14:26.617 { 00:14:26.617 "name": "BaseBdev4", 00:14:26.617 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:26.617 "is_configured": true, 00:14:26.617 "data_offset": 2048, 00:14:26.617 "data_size": 63488 00:14:26.617 } 00:14:26.617 ] 00:14:26.617 }' 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.617 [2024-11-27 21:46:49.571646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.617 [2024-11-27 21:46:49.575869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000027170 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.617 21:46:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:26.617 [2024-11-27 21:46:49.578098] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.571 "name": "raid_bdev1", 00:14:27.571 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:27.571 "strip_size_kb": 64, 00:14:27.571 "state": "online", 00:14:27.571 "raid_level": "raid5f", 00:14:27.571 "superblock": true, 00:14:27.571 "num_base_bdevs": 4, 00:14:27.571 "num_base_bdevs_discovered": 4, 00:14:27.571 "num_base_bdevs_operational": 4, 00:14:27.571 "process": { 00:14:27.571 "type": "rebuild", 00:14:27.571 "target": "spare", 00:14:27.571 "progress": { 00:14:27.571 "blocks": 19200, 00:14:27.571 "percent": 10 00:14:27.571 } 00:14:27.571 }, 00:14:27.571 "base_bdevs_list": [ 00:14:27.571 { 00:14:27.571 "name": "spare", 00:14:27.571 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:27.571 "is_configured": true, 00:14:27.571 "data_offset": 2048, 00:14:27.571 "data_size": 63488 00:14:27.571 }, 00:14:27.571 { 00:14:27.571 "name": "BaseBdev2", 00:14:27.571 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:27.571 "is_configured": true, 00:14:27.571 "data_offset": 2048, 00:14:27.571 "data_size": 63488 00:14:27.571 }, 00:14:27.571 { 
00:14:27.571 "name": "BaseBdev3", 00:14:27.571 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:27.571 "is_configured": true, 00:14:27.571 "data_offset": 2048, 00:14:27.571 "data_size": 63488 00:14:27.571 }, 00:14:27.571 { 00:14:27.571 "name": "BaseBdev4", 00:14:27.571 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:27.571 "is_configured": true, 00:14:27.571 "data_offset": 2048, 00:14:27.571 "data_size": 63488 00:14:27.571 } 00:14:27.571 ] 00:14:27.571 }' 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.571 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:27.833 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=519 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.833 "name": "raid_bdev1", 00:14:27.833 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:27.833 "strip_size_kb": 64, 00:14:27.833 "state": "online", 00:14:27.833 "raid_level": "raid5f", 00:14:27.833 "superblock": true, 00:14:27.833 "num_base_bdevs": 4, 00:14:27.833 "num_base_bdevs_discovered": 4, 00:14:27.833 "num_base_bdevs_operational": 4, 00:14:27.833 "process": { 00:14:27.833 "type": "rebuild", 00:14:27.833 "target": "spare", 00:14:27.833 "progress": { 00:14:27.833 "blocks": 21120, 00:14:27.833 "percent": 11 00:14:27.833 } 00:14:27.833 }, 00:14:27.833 "base_bdevs_list": [ 00:14:27.833 { 00:14:27.833 "name": "spare", 00:14:27.833 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:27.833 "is_configured": true, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 00:14:27.833 "name": "BaseBdev2", 00:14:27.833 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:27.833 "is_configured": true, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 
00:14:27.833 "name": "BaseBdev3", 00:14:27.833 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:27.833 "is_configured": true, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 }, 00:14:27.833 { 00:14:27.833 "name": "BaseBdev4", 00:14:27.833 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:27.833 "is_configured": true, 00:14:27.833 "data_offset": 2048, 00:14:27.833 "data_size": 63488 00:14:27.833 } 00:14:27.833 ] 00:14:27.833 }' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.833 21:46:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.772 21:46:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.772 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.032 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.032 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.032 "name": "raid_bdev1", 00:14:29.032 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:29.032 "strip_size_kb": 64, 00:14:29.032 "state": "online", 00:14:29.032 "raid_level": "raid5f", 00:14:29.032 "superblock": true, 00:14:29.032 "num_base_bdevs": 4, 00:14:29.032 "num_base_bdevs_discovered": 4, 00:14:29.032 "num_base_bdevs_operational": 4, 00:14:29.032 "process": { 00:14:29.032 "type": "rebuild", 00:14:29.032 "target": "spare", 00:14:29.032 "progress": { 00:14:29.032 "blocks": 42240, 00:14:29.032 "percent": 22 00:14:29.032 } 00:14:29.032 }, 00:14:29.032 "base_bdevs_list": [ 00:14:29.032 { 00:14:29.032 "name": "spare", 00:14:29.032 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:29.032 "is_configured": true, 00:14:29.032 "data_offset": 2048, 00:14:29.032 "data_size": 63488 00:14:29.032 }, 00:14:29.032 { 00:14:29.032 "name": "BaseBdev2", 00:14:29.032 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:29.032 "is_configured": true, 00:14:29.032 "data_offset": 2048, 00:14:29.032 "data_size": 63488 00:14:29.032 }, 00:14:29.032 { 00:14:29.032 "name": "BaseBdev3", 00:14:29.032 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:29.032 "is_configured": true, 00:14:29.032 "data_offset": 2048, 00:14:29.032 "data_size": 63488 00:14:29.032 }, 00:14:29.032 { 00:14:29.032 "name": "BaseBdev4", 00:14:29.032 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:29.032 "is_configured": true, 00:14:29.032 "data_offset": 2048, 00:14:29.032 "data_size": 63488 00:14:29.032 } 00:14:29.032 ] 00:14:29.032 }' 00:14:29.032 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.032 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.032 21:46:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.032 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.032 21:46:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.970 "name": "raid_bdev1", 00:14:29.970 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:29.970 "strip_size_kb": 64, 00:14:29.970 "state": 
"online", 00:14:29.970 "raid_level": "raid5f", 00:14:29.970 "superblock": true, 00:14:29.970 "num_base_bdevs": 4, 00:14:29.970 "num_base_bdevs_discovered": 4, 00:14:29.970 "num_base_bdevs_operational": 4, 00:14:29.970 "process": { 00:14:29.970 "type": "rebuild", 00:14:29.970 "target": "spare", 00:14:29.970 "progress": { 00:14:29.970 "blocks": 65280, 00:14:29.970 "percent": 34 00:14:29.970 } 00:14:29.970 }, 00:14:29.970 "base_bdevs_list": [ 00:14:29.970 { 00:14:29.970 "name": "spare", 00:14:29.970 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:29.970 "is_configured": true, 00:14:29.970 "data_offset": 2048, 00:14:29.970 "data_size": 63488 00:14:29.970 }, 00:14:29.970 { 00:14:29.970 "name": "BaseBdev2", 00:14:29.970 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:29.970 "is_configured": true, 00:14:29.970 "data_offset": 2048, 00:14:29.970 "data_size": 63488 00:14:29.970 }, 00:14:29.970 { 00:14:29.970 "name": "BaseBdev3", 00:14:29.970 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:29.970 "is_configured": true, 00:14:29.970 "data_offset": 2048, 00:14:29.970 "data_size": 63488 00:14:29.970 }, 00:14:29.970 { 00:14:29.970 "name": "BaseBdev4", 00:14:29.970 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:29.970 "is_configured": true, 00:14:29.970 "data_offset": 2048, 00:14:29.970 "data_size": 63488 00:14:29.970 } 00:14:29.970 ] 00:14:29.970 }' 00:14:29.970 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.229 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.229 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.229 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.229 21:46:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:31.168 "name": "raid_bdev1", 00:14:31.168 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:31.168 "strip_size_kb": 64, 00:14:31.168 "state": "online", 00:14:31.168 "raid_level": "raid5f", 00:14:31.168 "superblock": true, 00:14:31.168 "num_base_bdevs": 4, 00:14:31.168 "num_base_bdevs_discovered": 4, 00:14:31.168 "num_base_bdevs_operational": 4, 00:14:31.168 "process": { 00:14:31.168 "type": "rebuild", 00:14:31.168 "target": "spare", 00:14:31.168 "progress": { 00:14:31.168 "blocks": 86400, 00:14:31.168 "percent": 45 00:14:31.168 } 00:14:31.168 }, 00:14:31.168 "base_bdevs_list": [ 00:14:31.168 { 00:14:31.168 "name": "spare", 00:14:31.168 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 
00:14:31.168 "is_configured": true, 00:14:31.168 "data_offset": 2048, 00:14:31.168 "data_size": 63488 00:14:31.168 }, 00:14:31.168 { 00:14:31.168 "name": "BaseBdev2", 00:14:31.168 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:31.168 "is_configured": true, 00:14:31.168 "data_offset": 2048, 00:14:31.168 "data_size": 63488 00:14:31.168 }, 00:14:31.168 { 00:14:31.168 "name": "BaseBdev3", 00:14:31.168 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:31.168 "is_configured": true, 00:14:31.168 "data_offset": 2048, 00:14:31.168 "data_size": 63488 00:14:31.168 }, 00:14:31.168 { 00:14:31.168 "name": "BaseBdev4", 00:14:31.168 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:31.168 "is_configured": true, 00:14:31.168 "data_offset": 2048, 00:14:31.168 "data_size": 63488 00:14:31.168 } 00:14:31.168 ] 00:14:31.168 }' 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:31.168 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.427 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:31.427 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.427 21:46:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.367 21:46:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.367 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.367 "name": "raid_bdev1", 00:14:32.367 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:32.367 "strip_size_kb": 64, 00:14:32.367 "state": "online", 00:14:32.367 "raid_level": "raid5f", 00:14:32.367 "superblock": true, 00:14:32.367 "num_base_bdevs": 4, 00:14:32.367 "num_base_bdevs_discovered": 4, 00:14:32.367 "num_base_bdevs_operational": 4, 00:14:32.367 "process": { 00:14:32.367 "type": "rebuild", 00:14:32.367 "target": "spare", 00:14:32.367 "progress": { 00:14:32.367 "blocks": 109440, 00:14:32.367 "percent": 57 00:14:32.367 } 00:14:32.367 }, 00:14:32.367 "base_bdevs_list": [ 00:14:32.367 { 00:14:32.367 "name": "spare", 00:14:32.367 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:32.367 "is_configured": true, 00:14:32.367 "data_offset": 2048, 00:14:32.367 "data_size": 63488 00:14:32.367 }, 00:14:32.367 { 00:14:32.367 "name": "BaseBdev2", 00:14:32.367 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:32.367 "is_configured": true, 00:14:32.367 "data_offset": 2048, 00:14:32.367 "data_size": 63488 00:14:32.367 }, 00:14:32.367 { 00:14:32.367 "name": "BaseBdev3", 00:14:32.367 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:32.367 "is_configured": true, 00:14:32.367 "data_offset": 2048, 00:14:32.367 
"data_size": 63488 00:14:32.367 }, 00:14:32.367 { 00:14:32.367 "name": "BaseBdev4", 00:14:32.367 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:32.367 "is_configured": true, 00:14:32.367 "data_offset": 2048, 00:14:32.368 "data_size": 63488 00:14:32.368 } 00:14:32.368 ] 00:14:32.368 }' 00:14:32.368 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.368 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.368 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.368 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.368 21:46:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:33.337 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.596 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.597 
21:46:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.597 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.597 "name": "raid_bdev1", 00:14:33.597 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:33.597 "strip_size_kb": 64, 00:14:33.597 "state": "online", 00:14:33.597 "raid_level": "raid5f", 00:14:33.597 "superblock": true, 00:14:33.597 "num_base_bdevs": 4, 00:14:33.597 "num_base_bdevs_discovered": 4, 00:14:33.597 "num_base_bdevs_operational": 4, 00:14:33.597 "process": { 00:14:33.597 "type": "rebuild", 00:14:33.597 "target": "spare", 00:14:33.597 "progress": { 00:14:33.597 "blocks": 130560, 00:14:33.597 "percent": 68 00:14:33.597 } 00:14:33.597 }, 00:14:33.597 "base_bdevs_list": [ 00:14:33.597 { 00:14:33.597 "name": "spare", 00:14:33.597 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:33.597 "is_configured": true, 00:14:33.597 "data_offset": 2048, 00:14:33.597 "data_size": 63488 00:14:33.597 }, 00:14:33.597 { 00:14:33.597 "name": "BaseBdev2", 00:14:33.597 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:33.597 "is_configured": true, 00:14:33.597 "data_offset": 2048, 00:14:33.597 "data_size": 63488 00:14:33.597 }, 00:14:33.597 { 00:14:33.597 "name": "BaseBdev3", 00:14:33.597 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:33.597 "is_configured": true, 00:14:33.597 "data_offset": 2048, 00:14:33.597 "data_size": 63488 00:14:33.597 }, 00:14:33.597 { 00:14:33.597 "name": "BaseBdev4", 00:14:33.597 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:33.597 "is_configured": true, 00:14:33.597 "data_offset": 2048, 00:14:33.597 "data_size": 63488 00:14:33.597 } 00:14:33.597 ] 00:14:33.597 }' 00:14:33.597 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.597 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.597 21:46:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.597 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.597 21:46:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.535 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.794 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.794 "name": "raid_bdev1", 00:14:34.794 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:34.794 "strip_size_kb": 64, 00:14:34.794 "state": "online", 00:14:34.794 "raid_level": "raid5f", 00:14:34.794 "superblock": true, 00:14:34.794 "num_base_bdevs": 4, 00:14:34.794 "num_base_bdevs_discovered": 4, 00:14:34.794 "num_base_bdevs_operational": 
4, 00:14:34.794 "process": { 00:14:34.794 "type": "rebuild", 00:14:34.794 "target": "spare", 00:14:34.794 "progress": { 00:14:34.794 "blocks": 153600, 00:14:34.794 "percent": 80 00:14:34.794 } 00:14:34.794 }, 00:14:34.794 "base_bdevs_list": [ 00:14:34.794 { 00:14:34.794 "name": "spare", 00:14:34.794 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:34.794 "is_configured": true, 00:14:34.794 "data_offset": 2048, 00:14:34.794 "data_size": 63488 00:14:34.794 }, 00:14:34.794 { 00:14:34.794 "name": "BaseBdev2", 00:14:34.794 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:34.794 "is_configured": true, 00:14:34.794 "data_offset": 2048, 00:14:34.794 "data_size": 63488 00:14:34.794 }, 00:14:34.794 { 00:14:34.794 "name": "BaseBdev3", 00:14:34.794 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:34.794 "is_configured": true, 00:14:34.794 "data_offset": 2048, 00:14:34.794 "data_size": 63488 00:14:34.794 }, 00:14:34.795 { 00:14:34.795 "name": "BaseBdev4", 00:14:34.795 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:34.795 "is_configured": true, 00:14:34.795 "data_offset": 2048, 00:14:34.795 "data_size": 63488 00:14:34.795 } 00:14:34.795 ] 00:14:34.795 }' 00:14:34.795 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.795 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.795 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.795 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.795 21:46:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.734 
21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.734 "name": "raid_bdev1", 00:14:35.734 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:35.734 "strip_size_kb": 64, 00:14:35.734 "state": "online", 00:14:35.734 "raid_level": "raid5f", 00:14:35.734 "superblock": true, 00:14:35.734 "num_base_bdevs": 4, 00:14:35.734 "num_base_bdevs_discovered": 4, 00:14:35.734 "num_base_bdevs_operational": 4, 00:14:35.734 "process": { 00:14:35.734 "type": "rebuild", 00:14:35.734 "target": "spare", 00:14:35.734 "progress": { 00:14:35.734 "blocks": 174720, 00:14:35.734 "percent": 91 00:14:35.734 } 00:14:35.734 }, 00:14:35.734 "base_bdevs_list": [ 00:14:35.734 { 00:14:35.734 "name": "spare", 00:14:35.734 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:35.734 "is_configured": true, 00:14:35.734 "data_offset": 2048, 00:14:35.734 "data_size": 63488 00:14:35.734 }, 00:14:35.734 { 00:14:35.734 "name": "BaseBdev2", 00:14:35.734 "uuid": 
"46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:35.734 "is_configured": true, 00:14:35.734 "data_offset": 2048, 00:14:35.734 "data_size": 63488 00:14:35.734 }, 00:14:35.734 { 00:14:35.734 "name": "BaseBdev3", 00:14:35.734 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:35.734 "is_configured": true, 00:14:35.734 "data_offset": 2048, 00:14:35.734 "data_size": 63488 00:14:35.734 }, 00:14:35.734 { 00:14:35.734 "name": "BaseBdev4", 00:14:35.734 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:35.734 "is_configured": true, 00:14:35.734 "data_offset": 2048, 00:14:35.734 "data_size": 63488 00:14:35.734 } 00:14:35.734 ] 00:14:35.734 }' 00:14:35.734 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.994 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.994 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.994 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.994 21:46:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.567 [2024-11-27 21:46:59.628785] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:36.567 [2024-11-27 21:46:59.628960] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:36.567 [2024-11-27 21:46:59.629118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.827 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.827 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.827 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.827 21:46:59 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.828 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.087 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.087 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.087 "name": "raid_bdev1", 00:14:37.087 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:37.087 "strip_size_kb": 64, 00:14:37.087 "state": "online", 00:14:37.087 "raid_level": "raid5f", 00:14:37.087 "superblock": true, 00:14:37.087 "num_base_bdevs": 4, 00:14:37.087 "num_base_bdevs_discovered": 4, 00:14:37.087 "num_base_bdevs_operational": 4, 00:14:37.087 "base_bdevs_list": [ 00:14:37.087 { 00:14:37.087 "name": "spare", 00:14:37.087 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:37.087 "is_configured": true, 00:14:37.087 "data_offset": 2048, 00:14:37.087 "data_size": 63488 00:14:37.087 }, 00:14:37.087 { 00:14:37.087 "name": "BaseBdev2", 00:14:37.087 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:37.087 "is_configured": true, 00:14:37.087 "data_offset": 2048, 00:14:37.087 "data_size": 63488 00:14:37.087 }, 00:14:37.087 { 00:14:37.087 "name": "BaseBdev3", 00:14:37.087 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:37.087 "is_configured": true, 00:14:37.087 "data_offset": 2048, 00:14:37.087 "data_size": 63488 00:14:37.087 }, 
00:14:37.087 { 00:14:37.087 "name": "BaseBdev4", 00:14:37.087 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:37.088 "is_configured": true, 00:14:37.088 "data_offset": 2048, 00:14:37.088 "data_size": 63488 00:14:37.088 } 00:14:37.088 ] 00:14:37.088 }' 00:14:37.088 21:46:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.088 "name": "raid_bdev1", 00:14:37.088 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:37.088 "strip_size_kb": 64, 00:14:37.088 "state": "online", 00:14:37.088 "raid_level": "raid5f", 00:14:37.088 "superblock": true, 00:14:37.088 "num_base_bdevs": 4, 00:14:37.088 "num_base_bdevs_discovered": 4, 00:14:37.088 "num_base_bdevs_operational": 4, 00:14:37.088 "base_bdevs_list": [ 00:14:37.088 { 00:14:37.088 "name": "spare", 00:14:37.088 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:37.088 "is_configured": true, 00:14:37.088 "data_offset": 2048, 00:14:37.088 "data_size": 63488 00:14:37.088 }, 00:14:37.088 { 00:14:37.088 "name": "BaseBdev2", 00:14:37.088 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:37.088 "is_configured": true, 00:14:37.088 "data_offset": 2048, 00:14:37.088 "data_size": 63488 00:14:37.088 }, 00:14:37.088 { 00:14:37.088 "name": "BaseBdev3", 00:14:37.088 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:37.088 "is_configured": true, 00:14:37.088 "data_offset": 2048, 00:14:37.088 "data_size": 63488 00:14:37.088 }, 00:14:37.088 { 00:14:37.088 "name": "BaseBdev4", 00:14:37.088 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:37.088 "is_configured": true, 00:14:37.088 "data_offset": 2048, 00:14:37.088 "data_size": 63488 00:14:37.088 } 00:14:37.088 ] 00:14:37.088 }' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:37.088 21:47:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.088 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.348 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.348 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.348 "name": "raid_bdev1", 00:14:37.348 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:37.348 "strip_size_kb": 64, 00:14:37.348 "state": "online", 00:14:37.348 "raid_level": "raid5f", 00:14:37.348 "superblock": true, 00:14:37.348 "num_base_bdevs": 4, 00:14:37.348 "num_base_bdevs_discovered": 4, 00:14:37.348 "num_base_bdevs_operational": 4, 00:14:37.348 
"base_bdevs_list": [ 00:14:37.348 { 00:14:37.348 "name": "spare", 00:14:37.348 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:37.348 "is_configured": true, 00:14:37.348 "data_offset": 2048, 00:14:37.348 "data_size": 63488 00:14:37.348 }, 00:14:37.348 { 00:14:37.348 "name": "BaseBdev2", 00:14:37.348 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:37.348 "is_configured": true, 00:14:37.348 "data_offset": 2048, 00:14:37.348 "data_size": 63488 00:14:37.348 }, 00:14:37.348 { 00:14:37.348 "name": "BaseBdev3", 00:14:37.348 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:37.348 "is_configured": true, 00:14:37.348 "data_offset": 2048, 00:14:37.348 "data_size": 63488 00:14:37.348 }, 00:14:37.348 { 00:14:37.348 "name": "BaseBdev4", 00:14:37.348 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:37.348 "is_configured": true, 00:14:37.348 "data_offset": 2048, 00:14:37.348 "data_size": 63488 00:14:37.348 } 00:14:37.348 ] 00:14:37.348 }' 00:14:37.348 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.348 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.607 [2024-11-27 21:47:00.596677] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:37.607 [2024-11-27 21:47:00.596774] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.607 [2024-11-27 21:47:00.596901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.607 [2024-11-27 21:47:00.597075] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:14:37.607 [2024-11-27 21:47:00.597135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:37.607 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:37.866 /dev/nbd0 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.866 1+0 records in 00:14:37.866 1+0 records out 00:14:37.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000496815 s, 8.2 MB/s 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:37.866 21:47:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:37.866 21:47:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:38.124 /dev/nbd1 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:14:38.124 1+0 records in 00:14:38.124 1+0 records out 00:14:38.124 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524342 s, 7.8 MB/s 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.124 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.125 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:38.383 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.643 [2024-11-27 21:47:01.672631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:38.643 [2024-11-27 21:47:01.672707] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:38.643 [2024-11-27 21:47:01.672737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:38.643 [2024-11-27 21:47:01.672753] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:38.643 [2024-11-27 21:47:01.675022] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:38.643 [2024-11-27 21:47:01.675064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:38.643 [2024-11-27 21:47:01.675174] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:38.643 [2024-11-27 21:47:01.675238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.643 [2024-11-27 21:47:01.675424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:38.643 [2024-11-27 21:47:01.675581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.643 [2024-11-27 21:47:01.675651] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:38.643 spare 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.643 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.903 [2024-11-27 21:47:01.775550] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:14:38.903 [2024-11-27 21:47:01.775576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:14:38.903 [2024-11-27 21:47:01.775846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000045820 00:14:38.903 [2024-11-27 21:47:01.776375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:14:38.903 [2024-11-27 21:47:01.776435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:14:38.903 [2024-11-27 21:47:01.776619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.903 "name": "raid_bdev1", 00:14:38.903 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:38.903 "strip_size_kb": 64, 00:14:38.903 "state": "online", 00:14:38.903 "raid_level": "raid5f", 00:14:38.903 "superblock": true, 00:14:38.903 "num_base_bdevs": 4, 00:14:38.903 "num_base_bdevs_discovered": 4, 00:14:38.903 "num_base_bdevs_operational": 4, 00:14:38.903 "base_bdevs_list": [ 00:14:38.903 { 00:14:38.903 "name": "spare", 00:14:38.903 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:38.903 "is_configured": true, 00:14:38.903 "data_offset": 2048, 00:14:38.903 "data_size": 63488 00:14:38.903 }, 00:14:38.903 { 00:14:38.903 "name": "BaseBdev2", 00:14:38.903 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:38.903 "is_configured": true, 00:14:38.903 "data_offset": 
2048, 00:14:38.903 "data_size": 63488 00:14:38.903 }, 00:14:38.903 { 00:14:38.903 "name": "BaseBdev3", 00:14:38.903 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:38.903 "is_configured": true, 00:14:38.903 "data_offset": 2048, 00:14:38.903 "data_size": 63488 00:14:38.903 }, 00:14:38.903 { 00:14:38.903 "name": "BaseBdev4", 00:14:38.903 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:38.903 "is_configured": true, 00:14:38.903 "data_offset": 2048, 00:14:38.903 "data_size": 63488 00:14:38.903 } 00:14:38.903 ] 00:14:38.903 }' 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.903 21:47:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.162 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.163 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.163 "name": 
"raid_bdev1", 00:14:39.163 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:39.163 "strip_size_kb": 64, 00:14:39.163 "state": "online", 00:14:39.163 "raid_level": "raid5f", 00:14:39.163 "superblock": true, 00:14:39.163 "num_base_bdevs": 4, 00:14:39.163 "num_base_bdevs_discovered": 4, 00:14:39.163 "num_base_bdevs_operational": 4, 00:14:39.163 "base_bdevs_list": [ 00:14:39.163 { 00:14:39.163 "name": "spare", 00:14:39.163 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:39.163 "is_configured": true, 00:14:39.163 "data_offset": 2048, 00:14:39.163 "data_size": 63488 00:14:39.163 }, 00:14:39.163 { 00:14:39.163 "name": "BaseBdev2", 00:14:39.163 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:39.163 "is_configured": true, 00:14:39.163 "data_offset": 2048, 00:14:39.163 "data_size": 63488 00:14:39.163 }, 00:14:39.163 { 00:14:39.163 "name": "BaseBdev3", 00:14:39.163 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:39.163 "is_configured": true, 00:14:39.163 "data_offset": 2048, 00:14:39.163 "data_size": 63488 00:14:39.163 }, 00:14:39.163 { 00:14:39.163 "name": "BaseBdev4", 00:14:39.163 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:39.163 "is_configured": true, 00:14:39.163 "data_offset": 2048, 00:14:39.163 "data_size": 63488 00:14:39.163 } 00:14:39.163 ] 00:14:39.163 }' 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 
00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.422 [2024-11-27 21:47:02.407547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.422 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:39.423 "name": "raid_bdev1", 00:14:39.423 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:39.423 "strip_size_kb": 64, 00:14:39.423 "state": "online", 00:14:39.423 "raid_level": "raid5f", 00:14:39.423 "superblock": true, 00:14:39.423 "num_base_bdevs": 4, 00:14:39.423 "num_base_bdevs_discovered": 3, 00:14:39.423 "num_base_bdevs_operational": 3, 00:14:39.423 "base_bdevs_list": [ 00:14:39.423 { 00:14:39.423 "name": null, 00:14:39.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.423 "is_configured": false, 00:14:39.423 "data_offset": 0, 00:14:39.423 "data_size": 63488 00:14:39.423 }, 00:14:39.423 { 00:14:39.423 "name": "BaseBdev2", 00:14:39.423 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:39.423 "is_configured": true, 00:14:39.423 "data_offset": 2048, 00:14:39.423 "data_size": 63488 00:14:39.423 }, 00:14:39.423 { 00:14:39.423 "name": "BaseBdev3", 00:14:39.423 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:39.423 "is_configured": true, 00:14:39.423 "data_offset": 2048, 00:14:39.423 "data_size": 63488 00:14:39.423 }, 00:14:39.423 { 00:14:39.423 "name": "BaseBdev4", 00:14:39.423 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:39.423 "is_configured": true, 00:14:39.423 "data_offset": 
2048, 00:14:39.423 "data_size": 63488 00:14:39.423 } 00:14:39.423 ] 00:14:39.423 }' 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:39.423 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.991 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.991 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.991 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.991 [2024-11-27 21:47:02.846820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.991 [2024-11-27 21:47:02.847076] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:39.991 [2024-11-27 21:47:02.847147] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:39.991 [2024-11-27 21:47:02.847223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.991 [2024-11-27 21:47:02.852135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000458f0 00:14:39.991 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.991 21:47:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:39.991 [2024-11-27 21:47:02.854367] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.929 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.930 "name": "raid_bdev1", 00:14:40.930 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:40.930 "strip_size_kb": 64, 00:14:40.930 "state": "online", 00:14:40.930 
"raid_level": "raid5f", 00:14:40.930 "superblock": true, 00:14:40.930 "num_base_bdevs": 4, 00:14:40.930 "num_base_bdevs_discovered": 4, 00:14:40.930 "num_base_bdevs_operational": 4, 00:14:40.930 "process": { 00:14:40.930 "type": "rebuild", 00:14:40.930 "target": "spare", 00:14:40.930 "progress": { 00:14:40.930 "blocks": 19200, 00:14:40.930 "percent": 10 00:14:40.930 } 00:14:40.930 }, 00:14:40.930 "base_bdevs_list": [ 00:14:40.930 { 00:14:40.930 "name": "spare", 00:14:40.930 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:40.930 "is_configured": true, 00:14:40.930 "data_offset": 2048, 00:14:40.930 "data_size": 63488 00:14:40.930 }, 00:14:40.930 { 00:14:40.930 "name": "BaseBdev2", 00:14:40.930 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:40.930 "is_configured": true, 00:14:40.930 "data_offset": 2048, 00:14:40.930 "data_size": 63488 00:14:40.930 }, 00:14:40.930 { 00:14:40.930 "name": "BaseBdev3", 00:14:40.930 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:40.930 "is_configured": true, 00:14:40.930 "data_offset": 2048, 00:14:40.930 "data_size": 63488 00:14:40.930 }, 00:14:40.930 { 00:14:40.930 "name": "BaseBdev4", 00:14:40.930 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:40.930 "is_configured": true, 00:14:40.930 "data_offset": 2048, 00:14:40.930 "data_size": 63488 00:14:40.930 } 00:14:40.930 ] 00:14:40.930 }' 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.930 21:47:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.930 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.930 21:47:04 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.930 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.930 [2024-11-27 21:47:04.006186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.190 [2024-11-27 21:47:04.059791] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:41.190 [2024-11-27 21:47:04.059850] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.190 [2024-11-27 21:47:04.059870] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.190 [2024-11-27 21:47:04.059877] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.190 "name": "raid_bdev1", 00:14:41.190 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:41.190 "strip_size_kb": 64, 00:14:41.190 "state": "online", 00:14:41.190 "raid_level": "raid5f", 00:14:41.190 "superblock": true, 00:14:41.190 "num_base_bdevs": 4, 00:14:41.190 "num_base_bdevs_discovered": 3, 00:14:41.190 "num_base_bdevs_operational": 3, 00:14:41.190 "base_bdevs_list": [ 00:14:41.190 { 00:14:41.190 "name": null, 00:14:41.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.190 "is_configured": false, 00:14:41.190 "data_offset": 0, 00:14:41.190 "data_size": 63488 00:14:41.190 }, 00:14:41.190 { 00:14:41.190 "name": "BaseBdev2", 00:14:41.190 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:41.190 "is_configured": true, 00:14:41.190 "data_offset": 2048, 00:14:41.190 "data_size": 63488 00:14:41.190 }, 00:14:41.190 { 00:14:41.190 "name": "BaseBdev3", 00:14:41.190 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:41.190 "is_configured": true, 00:14:41.190 "data_offset": 2048, 00:14:41.190 "data_size": 63488 00:14:41.190 }, 00:14:41.190 { 00:14:41.190 "name": "BaseBdev4", 00:14:41.190 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:41.190 "is_configured": true, 00:14:41.190 "data_offset": 2048, 00:14:41.190 "data_size": 63488 00:14:41.190 } 00:14:41.190 ] 00:14:41.190 
}' 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.190 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.451 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:41.451 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.451 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.451 [2024-11-27 21:47:04.500504] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:41.451 [2024-11-27 21:47:04.500611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.451 [2024-11-27 21:47:04.500661] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:14:41.451 [2024-11-27 21:47:04.500694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.451 [2024-11-27 21:47:04.501183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.451 [2024-11-27 21:47:04.501243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:41.451 [2024-11-27 21:47:04.501372] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:41.451 [2024-11-27 21:47:04.501412] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:41.451 [2024-11-27 21:47:04.501472] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:41.451 [2024-11-27 21:47:04.501533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.451 [2024-11-27 21:47:04.505380] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000459c0 00:14:41.451 spare 00:14:41.451 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.451 21:47:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:41.451 [2024-11-27 21:47:04.507604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.390 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.390 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.390 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.650 "name": "raid_bdev1", 00:14:42.650 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:42.650 "strip_size_kb": 64, 00:14:42.650 "state": 
"online", 00:14:42.650 "raid_level": "raid5f", 00:14:42.650 "superblock": true, 00:14:42.650 "num_base_bdevs": 4, 00:14:42.650 "num_base_bdevs_discovered": 4, 00:14:42.650 "num_base_bdevs_operational": 4, 00:14:42.650 "process": { 00:14:42.650 "type": "rebuild", 00:14:42.650 "target": "spare", 00:14:42.650 "progress": { 00:14:42.650 "blocks": 19200, 00:14:42.650 "percent": 10 00:14:42.650 } 00:14:42.650 }, 00:14:42.650 "base_bdevs_list": [ 00:14:42.650 { 00:14:42.650 "name": "spare", 00:14:42.650 "uuid": "f3d4f37a-3469-5b4f-bed3-95aa065f372e", 00:14:42.650 "is_configured": true, 00:14:42.650 "data_offset": 2048, 00:14:42.650 "data_size": 63488 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "BaseBdev2", 00:14:42.650 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:42.650 "is_configured": true, 00:14:42.650 "data_offset": 2048, 00:14:42.650 "data_size": 63488 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "BaseBdev3", 00:14:42.650 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:42.650 "is_configured": true, 00:14:42.650 "data_offset": 2048, 00:14:42.650 "data_size": 63488 00:14:42.650 }, 00:14:42.650 { 00:14:42.650 "name": "BaseBdev4", 00:14:42.650 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:42.650 "is_configured": true, 00:14:42.650 "data_offset": 2048, 00:14:42.650 "data_size": 63488 00:14:42.650 } 00:14:42.650 ] 00:14:42.650 }' 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.650 21:47:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 [2024-11-27 21:47:05.667935] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.650 [2024-11-27 21:47:05.713066] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.650 [2024-11-27 21:47:05.713130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.650 [2024-11-27 21:47:05.713147] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.650 [2024-11-27 21:47:05.713155] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.650 21:47:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.650 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.910 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.910 "name": "raid_bdev1", 00:14:42.910 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:42.910 "strip_size_kb": 64, 00:14:42.910 "state": "online", 00:14:42.910 "raid_level": "raid5f", 00:14:42.910 "superblock": true, 00:14:42.910 "num_base_bdevs": 4, 00:14:42.910 "num_base_bdevs_discovered": 3, 00:14:42.910 "num_base_bdevs_operational": 3, 00:14:42.910 "base_bdevs_list": [ 00:14:42.910 { 00:14:42.910 "name": null, 00:14:42.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.910 "is_configured": false, 00:14:42.910 "data_offset": 0, 00:14:42.910 "data_size": 63488 00:14:42.910 }, 00:14:42.910 { 00:14:42.910 "name": "BaseBdev2", 00:14:42.910 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:42.910 "is_configured": true, 00:14:42.910 "data_offset": 2048, 00:14:42.910 "data_size": 63488 00:14:42.910 }, 00:14:42.910 { 00:14:42.910 "name": "BaseBdev3", 00:14:42.910 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:42.910 "is_configured": true, 00:14:42.910 "data_offset": 2048, 00:14:42.910 "data_size": 63488 00:14:42.910 }, 00:14:42.910 { 00:14:42.910 "name": "BaseBdev4", 00:14:42.910 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:42.910 "is_configured": true, 00:14:42.910 "data_offset": 2048, 00:14:42.910 
"data_size": 63488 00:14:42.910 } 00:14:42.910 ] 00:14:42.910 }' 00:14:42.910 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.910 21:47:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.170 "name": "raid_bdev1", 00:14:43.170 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:43.170 "strip_size_kb": 64, 00:14:43.170 "state": "online", 00:14:43.170 "raid_level": "raid5f", 00:14:43.170 "superblock": true, 00:14:43.170 "num_base_bdevs": 4, 00:14:43.170 "num_base_bdevs_discovered": 3, 00:14:43.170 "num_base_bdevs_operational": 3, 00:14:43.170 "base_bdevs_list": [ 00:14:43.170 { 00:14:43.170 "name": null, 00:14:43.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.170 
"is_configured": false, 00:14:43.170 "data_offset": 0, 00:14:43.170 "data_size": 63488 00:14:43.170 }, 00:14:43.170 { 00:14:43.170 "name": "BaseBdev2", 00:14:43.170 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:43.170 "is_configured": true, 00:14:43.170 "data_offset": 2048, 00:14:43.170 "data_size": 63488 00:14:43.170 }, 00:14:43.170 { 00:14:43.170 "name": "BaseBdev3", 00:14:43.170 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:43.170 "is_configured": true, 00:14:43.170 "data_offset": 2048, 00:14:43.170 "data_size": 63488 00:14:43.170 }, 00:14:43.170 { 00:14:43.170 "name": "BaseBdev4", 00:14:43.170 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:43.170 "is_configured": true, 00:14:43.170 "data_offset": 2048, 00:14:43.170 "data_size": 63488 00:14:43.170 } 00:14:43.170 ] 00:14:43.170 }' 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.170 21:47:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.170 [2024-11-27 21:47:06.281438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:43.170 [2024-11-27 21:47:06.281494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.170 [2024-11-27 21:47:06.281514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:43.170 [2024-11-27 21:47:06.281525] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.170 [2024-11-27 21:47:06.281929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.170 [2024-11-27 21:47:06.281958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:43.170 [2024-11-27 21:47:06.282024] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:43.170 [2024-11-27 21:47:06.282044] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.170 [2024-11-27 21:47:06.282060] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:43.170 [2024-11-27 21:47:06.282074] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:43.170 BaseBdev1 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.170 21:47:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.548 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.548 "name": "raid_bdev1", 00:14:44.548 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:44.548 "strip_size_kb": 64, 00:14:44.548 "state": "online", 00:14:44.548 "raid_level": "raid5f", 00:14:44.548 "superblock": true, 00:14:44.548 "num_base_bdevs": 4, 00:14:44.548 "num_base_bdevs_discovered": 3, 00:14:44.548 "num_base_bdevs_operational": 3, 00:14:44.548 "base_bdevs_list": [ 00:14:44.548 { 00:14:44.548 "name": null, 00:14:44.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.548 "is_configured": false, 00:14:44.548 
"data_offset": 0, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev2", 00:14:44.548 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev3", 00:14:44.548 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:44.548 "is_configured": true, 00:14:44.548 "data_offset": 2048, 00:14:44.548 "data_size": 63488 00:14:44.548 }, 00:14:44.548 { 00:14:44.548 "name": "BaseBdev4", 00:14:44.548 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:44.548 "is_configured": true, 00:14:44.549 "data_offset": 2048, 00:14:44.549 "data_size": 63488 00:14:44.549 } 00:14:44.549 ] 00:14:44.549 }' 00:14:44.549 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.549 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.808 "name": "raid_bdev1", 00:14:44.808 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:44.808 "strip_size_kb": 64, 00:14:44.808 "state": "online", 00:14:44.808 "raid_level": "raid5f", 00:14:44.808 "superblock": true, 00:14:44.808 "num_base_bdevs": 4, 00:14:44.808 "num_base_bdevs_discovered": 3, 00:14:44.808 "num_base_bdevs_operational": 3, 00:14:44.808 "base_bdevs_list": [ 00:14:44.808 { 00:14:44.808 "name": null, 00:14:44.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.808 "is_configured": false, 00:14:44.808 "data_offset": 0, 00:14:44.808 "data_size": 63488 00:14:44.808 }, 00:14:44.808 { 00:14:44.808 "name": "BaseBdev2", 00:14:44.808 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:44.808 "is_configured": true, 00:14:44.808 "data_offset": 2048, 00:14:44.808 "data_size": 63488 00:14:44.808 }, 00:14:44.808 { 00:14:44.808 "name": "BaseBdev3", 00:14:44.808 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:44.808 "is_configured": true, 00:14:44.808 "data_offset": 2048, 00:14:44.808 "data_size": 63488 00:14:44.808 }, 00:14:44.808 { 00:14:44.808 "name": "BaseBdev4", 00:14:44.808 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:44.808 "is_configured": true, 00:14:44.808 "data_offset": 2048, 00:14:44.808 "data_size": 63488 00:14:44.808 } 00:14:44.808 ] 00:14:44.808 }' 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:44.808 
21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.808 [2024-11-27 21:47:07.870738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:44.808 [2024-11-27 21:47:07.870926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:44.808 [2024-11-27 21:47:07.870948] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:44.808 request: 00:14:44.808 { 00:14:44.808 "base_bdev": "BaseBdev1", 00:14:44.808 "raid_bdev": "raid_bdev1", 00:14:44.808 "method": "bdev_raid_add_base_bdev", 00:14:44.808 "req_id": 1 00:14:44.808 } 00:14:44.808 Got JSON-RPC error response 00:14:44.808 response: 00:14:44.808 { 00:14:44.808 "code": -22, 00:14:44.808 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:14:44.808 } 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.808 21:47:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:46.195 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:46.195 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.195 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.195 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:46.195 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:46.196 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:46.196 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.196 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.196 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.223 "name": "raid_bdev1", 00:14:46.223 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:46.223 "strip_size_kb": 64, 00:14:46.223 "state": "online", 00:14:46.223 "raid_level": "raid5f", 00:14:46.223 "superblock": true, 00:14:46.223 "num_base_bdevs": 4, 00:14:46.223 "num_base_bdevs_discovered": 3, 00:14:46.223 "num_base_bdevs_operational": 3, 00:14:46.223 "base_bdevs_list": [ 00:14:46.223 { 00:14:46.223 "name": null, 00:14:46.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.223 "is_configured": false, 00:14:46.223 "data_offset": 0, 00:14:46.223 "data_size": 63488 00:14:46.223 }, 00:14:46.223 { 00:14:46.223 "name": "BaseBdev2", 00:14:46.223 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:46.223 "is_configured": true, 00:14:46.223 "data_offset": 2048, 00:14:46.223 "data_size": 63488 00:14:46.223 }, 00:14:46.223 { 00:14:46.223 "name": "BaseBdev3", 00:14:46.223 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:46.223 "is_configured": true, 00:14:46.223 "data_offset": 2048, 00:14:46.223 "data_size": 63488 00:14:46.223 }, 00:14:46.223 { 00:14:46.223 "name": "BaseBdev4", 00:14:46.223 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:46.223 "is_configured": true, 00:14:46.223 "data_offset": 2048, 00:14:46.223 "data_size": 63488 00:14:46.223 } 00:14:46.223 ] 00:14:46.223 }' 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.223 21:47:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.223 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.224 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.224 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.483 "name": "raid_bdev1", 00:14:46.483 "uuid": "0801c9f0-da5a-46d6-97c9-837df90f1920", 00:14:46.483 "strip_size_kb": 64, 00:14:46.483 "state": "online", 00:14:46.483 "raid_level": "raid5f", 00:14:46.483 "superblock": true, 00:14:46.483 "num_base_bdevs": 4, 00:14:46.483 "num_base_bdevs_discovered": 3, 00:14:46.483 "num_base_bdevs_operational": 3, 00:14:46.483 "base_bdevs_list": [ 00:14:46.483 { 00:14:46.483 "name": null, 00:14:46.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.483 "is_configured": false, 00:14:46.483 "data_offset": 0, 00:14:46.483 "data_size": 63488 00:14:46.483 }, 00:14:46.483 { 00:14:46.483 "name": "BaseBdev2", 00:14:46.483 "uuid": "46102b2d-9744-5d90-81d9-9a8b5b8f4276", 00:14:46.483 "is_configured": true, 
00:14:46.483 "data_offset": 2048, 00:14:46.483 "data_size": 63488 00:14:46.483 }, 00:14:46.483 { 00:14:46.483 "name": "BaseBdev3", 00:14:46.483 "uuid": "a1f706f3-3e6f-5e7c-af9c-7475e240c603", 00:14:46.483 "is_configured": true, 00:14:46.483 "data_offset": 2048, 00:14:46.483 "data_size": 63488 00:14:46.483 }, 00:14:46.483 { 00:14:46.483 "name": "BaseBdev4", 00:14:46.483 "uuid": "67eeffdb-352b-5a96-8a9b-a2e44c71cac4", 00:14:46.483 "is_configured": true, 00:14:46.483 "data_offset": 2048, 00:14:46.483 "data_size": 63488 00:14:46.483 } 00:14:46.483 ] 00:14:46.483 }' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 95184 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 95184 ']' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 95184 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95184 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 95184' 00:14:46.483 killing process with pid 95184 00:14:46.483 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 95184 00:14:46.483 Received shutdown signal, test time was about 60.000000 seconds 00:14:46.483 00:14:46.483 Latency(us) 00:14:46.483 [2024-11-27T21:47:09.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.483 [2024-11-27T21:47:09.604Z] =================================================================================================================== 00:14:46.483 [2024-11-27T21:47:09.604Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.484 [2024-11-27 21:47:09.452956] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:46.484 [2024-11-27 21:47:09.453068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:46.484 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 95184 00:14:46.484 [2024-11-27 21:47:09.453144] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:46.484 [2024-11-27 21:47:09.453155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:14:46.484 [2024-11-27 21:47:09.501937] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.744 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:46.744 00:14:46.744 real 0m24.824s 00:14:46.744 user 0m31.431s 00:14:46.744 sys 0m2.906s 00:14:46.744 ************************************ 00:14:46.744 END TEST raid5f_rebuild_test_sb 00:14:46.744 ************************************ 00:14:46.744 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.744 21:47:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.744 21:47:09 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:14:46.744 21:47:09 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:14:46.744 21:47:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:46.744 21:47:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.744 21:47:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.744 ************************************ 00:14:46.744 START TEST raid_state_function_test_sb_4k 00:14:46.744 ************************************ 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:46.744 21:47:09 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=95971 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 95971' 00:14:46.744 Process raid pid: 95971 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 95971 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 95971 ']' 00:14:46.744 21:47:09 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.744 21:47:09 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:47.005 [2024-11-27 21:47:09.876465] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:47.005 [2024-11-27 21:47:09.876685] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.005 [2024-11-27 21:47:10.033648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.005 [2024-11-27 21:47:10.058165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.005 [2024-11-27 21:47:10.100859] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.005 [2024-11-27 21:47:10.100899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.945 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.945 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:47.946 [2024-11-27 21:47:10.711938] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:47.946 [2024-11-27 21:47:10.712082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:47.946 [2024-11-27 21:47:10.712108] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:47.946 [2024-11-27 21:47:10.712119] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.946 
21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.946 "name": "Existed_Raid", 00:14:47.946 "uuid": "b0965324-0169-4735-bd98-63b4f62b69f9", 00:14:47.946 "strip_size_kb": 0, 00:14:47.946 "state": "configuring", 00:14:47.946 "raid_level": "raid1", 00:14:47.946 "superblock": true, 00:14:47.946 "num_base_bdevs": 2, 00:14:47.946 "num_base_bdevs_discovered": 0, 00:14:47.946 "num_base_bdevs_operational": 2, 00:14:47.946 "base_bdevs_list": [ 00:14:47.946 { 00:14:47.946 "name": "BaseBdev1", 00:14:47.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.946 "is_configured": false, 00:14:47.946 "data_offset": 0, 00:14:47.946 "data_size": 0 00:14:47.946 }, 00:14:47.946 { 00:14:47.946 "name": "BaseBdev2", 00:14:47.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.946 "is_configured": false, 00:14:47.946 "data_offset": 0, 00:14:47.946 "data_size": 0 00:14:47.946 } 00:14:47.946 ] 00:14:47.946 }' 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.946 21:47:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 [2024-11-27 21:47:11.095187] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.206 [2024-11-27 21:47:11.095278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 [2024-11-27 21:47:11.103188] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:48.206 [2024-11-27 21:47:11.103271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:48.206 [2024-11-27 21:47:11.103296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.206 [2024-11-27 21:47:11.103330] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.206 21:47:11 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 [2024-11-27 21:47:11.120054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.206 BaseBdev1 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.206 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.206 [ 00:14:48.206 { 00:14:48.206 "name": "BaseBdev1", 00:14:48.206 "aliases": [ 00:14:48.206 
"3e64bd17-370f-4e17-b2fc-a1caa9cb4638" 00:14:48.206 ], 00:14:48.206 "product_name": "Malloc disk", 00:14:48.206 "block_size": 4096, 00:14:48.206 "num_blocks": 8192, 00:14:48.206 "uuid": "3e64bd17-370f-4e17-b2fc-a1caa9cb4638", 00:14:48.206 "assigned_rate_limits": { 00:14:48.206 "rw_ios_per_sec": 0, 00:14:48.206 "rw_mbytes_per_sec": 0, 00:14:48.206 "r_mbytes_per_sec": 0, 00:14:48.206 "w_mbytes_per_sec": 0 00:14:48.206 }, 00:14:48.206 "claimed": true, 00:14:48.206 "claim_type": "exclusive_write", 00:14:48.206 "zoned": false, 00:14:48.206 "supported_io_types": { 00:14:48.206 "read": true, 00:14:48.206 "write": true, 00:14:48.206 "unmap": true, 00:14:48.206 "flush": true, 00:14:48.206 "reset": true, 00:14:48.206 "nvme_admin": false, 00:14:48.206 "nvme_io": false, 00:14:48.206 "nvme_io_md": false, 00:14:48.206 "write_zeroes": true, 00:14:48.206 "zcopy": true, 00:14:48.206 "get_zone_info": false, 00:14:48.207 "zone_management": false, 00:14:48.207 "zone_append": false, 00:14:48.207 "compare": false, 00:14:48.207 "compare_and_write": false, 00:14:48.207 "abort": true, 00:14:48.207 "seek_hole": false, 00:14:48.207 "seek_data": false, 00:14:48.207 "copy": true, 00:14:48.207 "nvme_iov_md": false 00:14:48.207 }, 00:14:48.207 "memory_domains": [ 00:14:48.207 { 00:14:48.207 "dma_device_id": "system", 00:14:48.207 "dma_device_type": 1 00:14:48.207 }, 00:14:48.207 { 00:14:48.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.207 "dma_device_type": 2 00:14:48.207 } 00:14:48.207 ], 00:14:48.207 "driver_specific": {} 00:14:48.207 } 00:14:48.207 ] 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.207 "name": "Existed_Raid", 00:14:48.207 "uuid": "b763e39b-bd96-4988-abc7-7b8c061ef761", 00:14:48.207 "strip_size_kb": 0, 00:14:48.207 "state": "configuring", 00:14:48.207 "raid_level": "raid1", 00:14:48.207 "superblock": true, 00:14:48.207 "num_base_bdevs": 2, 00:14:48.207 
"num_base_bdevs_discovered": 1, 00:14:48.207 "num_base_bdevs_operational": 2, 00:14:48.207 "base_bdevs_list": [ 00:14:48.207 { 00:14:48.207 "name": "BaseBdev1", 00:14:48.207 "uuid": "3e64bd17-370f-4e17-b2fc-a1caa9cb4638", 00:14:48.207 "is_configured": true, 00:14:48.207 "data_offset": 256, 00:14:48.207 "data_size": 7936 00:14:48.207 }, 00:14:48.207 { 00:14:48.207 "name": "BaseBdev2", 00:14:48.207 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.207 "is_configured": false, 00:14:48.207 "data_offset": 0, 00:14:48.207 "data_size": 0 00:14:48.207 } 00:14:48.207 ] 00:14:48.207 }' 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.207 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.776 [2024-11-27 21:47:11.611258] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:48.776 [2024-11-27 21:47:11.611306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.776 [2024-11-27 21:47:11.623259] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.776 [2024-11-27 21:47:11.625103] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:48.776 [2024-11-27 21:47:11.625188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.776 "name": "Existed_Raid", 00:14:48.776 "uuid": "20ee1e0f-b52b-439c-a639-76c65a4d75ea", 00:14:48.776 "strip_size_kb": 0, 00:14:48.776 "state": "configuring", 00:14:48.776 "raid_level": "raid1", 00:14:48.776 "superblock": true, 00:14:48.776 "num_base_bdevs": 2, 00:14:48.776 "num_base_bdevs_discovered": 1, 00:14:48.776 "num_base_bdevs_operational": 2, 00:14:48.776 "base_bdevs_list": [ 00:14:48.776 { 00:14:48.776 "name": "BaseBdev1", 00:14:48.776 "uuid": "3e64bd17-370f-4e17-b2fc-a1caa9cb4638", 00:14:48.776 "is_configured": true, 00:14:48.776 "data_offset": 256, 00:14:48.776 "data_size": 7936 00:14:48.776 }, 00:14:48.776 { 00:14:48.776 "name": "BaseBdev2", 00:14:48.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.776 "is_configured": false, 00:14:48.776 "data_offset": 0, 00:14:48.776 "data_size": 0 00:14:48.776 } 00:14:48.776 ] 00:14:48.776 }' 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.776 21:47:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.035 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:14:49.035 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.035 21:47:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.035 [2024-11-27 21:47:12.057512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:49.035 [2024-11-27 21:47:12.057838] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:14:49.035 [2024-11-27 21:47:12.057890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:49.035 BaseBdev2 00:14:49.035 [2024-11-27 21:47:12.058208] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:49.035 [2024-11-27 21:47:12.058384] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:49.036 [2024-11-27 21:47:12.058451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:14:49.036 [2024-11-27 21:47:12.058622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:49.036 21:47:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.036 [ 00:14:49.036 { 00:14:49.036 "name": "BaseBdev2", 00:14:49.036 "aliases": [ 00:14:49.036 "ba9acd67-8b2b-44a9-8405-bd0ca0dd5f95" 00:14:49.036 ], 00:14:49.036 "product_name": "Malloc disk", 00:14:49.036 "block_size": 4096, 00:14:49.036 "num_blocks": 8192, 00:14:49.036 "uuid": "ba9acd67-8b2b-44a9-8405-bd0ca0dd5f95", 00:14:49.036 "assigned_rate_limits": { 00:14:49.036 "rw_ios_per_sec": 0, 00:14:49.036 "rw_mbytes_per_sec": 0, 00:14:49.036 "r_mbytes_per_sec": 0, 00:14:49.036 "w_mbytes_per_sec": 0 00:14:49.036 }, 00:14:49.036 "claimed": true, 00:14:49.036 "claim_type": "exclusive_write", 00:14:49.036 "zoned": false, 00:14:49.036 "supported_io_types": { 00:14:49.036 "read": true, 00:14:49.036 "write": true, 00:14:49.036 "unmap": true, 00:14:49.036 "flush": true, 00:14:49.036 "reset": true, 00:14:49.036 "nvme_admin": false, 00:14:49.036 "nvme_io": false, 00:14:49.036 "nvme_io_md": false, 00:14:49.036 "write_zeroes": true, 00:14:49.036 "zcopy": true, 00:14:49.036 "get_zone_info": false, 00:14:49.036 "zone_management": false, 00:14:49.036 "zone_append": false, 00:14:49.036 "compare": false, 00:14:49.036 "compare_and_write": false, 00:14:49.036 "abort": true, 00:14:49.036 "seek_hole": false, 00:14:49.036 "seek_data": false, 00:14:49.036 "copy": true, 00:14:49.036 "nvme_iov_md": false 
00:14:49.036 }, 00:14:49.036 "memory_domains": [ 00:14:49.036 { 00:14:49.036 "dma_device_id": "system", 00:14:49.036 "dma_device_type": 1 00:14:49.036 }, 00:14:49.036 { 00:14:49.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.036 "dma_device_type": 2 00:14:49.036 } 00:14:49.036 ], 00:14:49.036 "driver_specific": {} 00:14:49.036 } 00:14:49.036 ] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.036 "name": "Existed_Raid", 00:14:49.036 "uuid": "20ee1e0f-b52b-439c-a639-76c65a4d75ea", 00:14:49.036 "strip_size_kb": 0, 00:14:49.036 "state": "online", 00:14:49.036 "raid_level": "raid1", 00:14:49.036 "superblock": true, 00:14:49.036 "num_base_bdevs": 2, 00:14:49.036 "num_base_bdevs_discovered": 2, 00:14:49.036 "num_base_bdevs_operational": 2, 00:14:49.036 "base_bdevs_list": [ 00:14:49.036 { 00:14:49.036 "name": "BaseBdev1", 00:14:49.036 "uuid": "3e64bd17-370f-4e17-b2fc-a1caa9cb4638", 00:14:49.036 "is_configured": true, 00:14:49.036 "data_offset": 256, 00:14:49.036 "data_size": 7936 00:14:49.036 }, 00:14:49.036 { 00:14:49.036 "name": "BaseBdev2", 00:14:49.036 "uuid": "ba9acd67-8b2b-44a9-8405-bd0ca0dd5f95", 00:14:49.036 "is_configured": true, 00:14:49.036 "data_offset": 256, 00:14:49.036 "data_size": 7936 00:14:49.036 } 00:14:49.036 ] 00:14:49.036 }' 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.036 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:49.604 21:47:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.604 [2024-11-27 21:47:12.552998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:49.604 "name": "Existed_Raid", 00:14:49.604 "aliases": [ 00:14:49.604 "20ee1e0f-b52b-439c-a639-76c65a4d75ea" 00:14:49.604 ], 00:14:49.604 "product_name": "Raid Volume", 00:14:49.604 "block_size": 4096, 00:14:49.604 "num_blocks": 7936, 00:14:49.604 "uuid": "20ee1e0f-b52b-439c-a639-76c65a4d75ea", 00:14:49.604 "assigned_rate_limits": { 00:14:49.604 "rw_ios_per_sec": 0, 00:14:49.604 "rw_mbytes_per_sec": 0, 00:14:49.604 "r_mbytes_per_sec": 0, 00:14:49.604 "w_mbytes_per_sec": 0 00:14:49.604 }, 00:14:49.604 "claimed": false, 00:14:49.604 "zoned": false, 00:14:49.604 "supported_io_types": { 00:14:49.604 "read": true, 
00:14:49.604 "write": true, 00:14:49.604 "unmap": false, 00:14:49.604 "flush": false, 00:14:49.604 "reset": true, 00:14:49.604 "nvme_admin": false, 00:14:49.604 "nvme_io": false, 00:14:49.604 "nvme_io_md": false, 00:14:49.604 "write_zeroes": true, 00:14:49.604 "zcopy": false, 00:14:49.604 "get_zone_info": false, 00:14:49.604 "zone_management": false, 00:14:49.604 "zone_append": false, 00:14:49.604 "compare": false, 00:14:49.604 "compare_and_write": false, 00:14:49.604 "abort": false, 00:14:49.604 "seek_hole": false, 00:14:49.604 "seek_data": false, 00:14:49.604 "copy": false, 00:14:49.604 "nvme_iov_md": false 00:14:49.604 }, 00:14:49.604 "memory_domains": [ 00:14:49.604 { 00:14:49.604 "dma_device_id": "system", 00:14:49.604 "dma_device_type": 1 00:14:49.604 }, 00:14:49.604 { 00:14:49.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.604 "dma_device_type": 2 00:14:49.604 }, 00:14:49.604 { 00:14:49.604 "dma_device_id": "system", 00:14:49.604 "dma_device_type": 1 00:14:49.604 }, 00:14:49.604 { 00:14:49.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.604 "dma_device_type": 2 00:14:49.604 } 00:14:49.604 ], 00:14:49.604 "driver_specific": { 00:14:49.604 "raid": { 00:14:49.604 "uuid": "20ee1e0f-b52b-439c-a639-76c65a4d75ea", 00:14:49.604 "strip_size_kb": 0, 00:14:49.604 "state": "online", 00:14:49.604 "raid_level": "raid1", 00:14:49.604 "superblock": true, 00:14:49.604 "num_base_bdevs": 2, 00:14:49.604 "num_base_bdevs_discovered": 2, 00:14:49.604 "num_base_bdevs_operational": 2, 00:14:49.604 "base_bdevs_list": [ 00:14:49.604 { 00:14:49.604 "name": "BaseBdev1", 00:14:49.604 "uuid": "3e64bd17-370f-4e17-b2fc-a1caa9cb4638", 00:14:49.604 "is_configured": true, 00:14:49.604 "data_offset": 256, 00:14:49.604 "data_size": 7936 00:14:49.604 }, 00:14:49.604 { 00:14:49.604 "name": "BaseBdev2", 00:14:49.604 "uuid": "ba9acd67-8b2b-44a9-8405-bd0ca0dd5f95", 00:14:49.604 "is_configured": true, 00:14:49.604 "data_offset": 256, 00:14:49.604 "data_size": 7936 00:14:49.604 } 
00:14:49.604 ] 00:14:49.604 } 00:14:49.604 } 00:14:49.604 }' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:49.604 BaseBdev2' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.604 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:49.864 21:47:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.864 [2024-11-27 21:47:12.800344] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:14:49.864 21:47:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.864 "name": "Existed_Raid", 00:14:49.864 "uuid": "20ee1e0f-b52b-439c-a639-76c65a4d75ea", 00:14:49.864 "strip_size_kb": 0, 00:14:49.864 "state": "online", 00:14:49.864 "raid_level": "raid1", 00:14:49.864 "superblock": true, 00:14:49.864 
"num_base_bdevs": 2, 00:14:49.864 "num_base_bdevs_discovered": 1, 00:14:49.864 "num_base_bdevs_operational": 1, 00:14:49.864 "base_bdevs_list": [ 00:14:49.864 { 00:14:49.864 "name": null, 00:14:49.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.864 "is_configured": false, 00:14:49.864 "data_offset": 0, 00:14:49.864 "data_size": 7936 00:14:49.864 }, 00:14:49.864 { 00:14:49.864 "name": "BaseBdev2", 00:14:49.864 "uuid": "ba9acd67-8b2b-44a9-8405-bd0ca0dd5f95", 00:14:49.864 "is_configured": true, 00:14:49.864 "data_offset": 256, 00:14:49.864 "data_size": 7936 00:14:49.864 } 00:14:49.864 ] 00:14:49.864 }' 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.864 21:47:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.123 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.383 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.383 [2024-11-27 21:47:13.286912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:50.383 [2024-11-27 21:47:13.287078] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.383 [2024-11-27 21:47:13.298742] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.383 [2024-11-27 21:47:13.298792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.384 [2024-11-27 21:47:13.298815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:50.384 21:47:13 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 95971 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 95971 ']' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 95971 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95971 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95971' 00:14:50.384 killing process with pid 95971 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 95971 00:14:50.384 [2024-11-27 21:47:13.393649] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:50.384 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 95971 00:14:50.384 [2024-11-27 21:47:13.394623] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.643 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:14:50.643 00:14:50.643 real 0m3.831s 00:14:50.643 user 0m6.006s 00:14:50.643 sys 0m0.848s 00:14:50.643 21:47:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.643 21:47:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.643 ************************************ 00:14:50.643 END TEST raid_state_function_test_sb_4k 00:14:50.643 ************************************ 00:14:50.643 21:47:13 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:14:50.643 21:47:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:50.643 21:47:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.643 21:47:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.643 ************************************ 00:14:50.643 START TEST raid_superblock_test_4k 00:14:50.643 ************************************ 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:50.643 
21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=96212 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 96212 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 96212 ']' 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.643 21:47:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 [2024-11-27 21:47:13.779816] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:14:50.903 [2024-11-27 21:47:13.780262] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96212 ] 00:14:50.903 [2024-11-27 21:47:13.935666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.903 [2024-11-27 21:47:13.960897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.903 [2024-11-27 21:47:14.003621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:50.903 [2024-11-27 21:47:14.003747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.840 malloc1 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.840 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.840 [2024-11-27 21:47:14.643399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:51.841 [2024-11-27 21:47:14.643549] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.841 [2024-11-27 21:47:14.643599] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:14:51.841 [2024-11-27 21:47:14.643647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.841 [2024-11-27 21:47:14.645841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.841 [2024-11-27 21:47:14.645931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:51.841 pt1 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.841 malloc2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.841 [2024-11-27 21:47:14.676154] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:51.841 [2024-11-27 21:47:14.676272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.841 [2024-11-27 21:47:14.676309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.841 [2024-11-27 21:47:14.676339] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.841 [2024-11-27 21:47:14.678383] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.841 [2024-11-27 
21:47:14.678472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:51.841 pt2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.841 [2024-11-27 21:47:14.688169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:51.841 [2024-11-27 21:47:14.690113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:51.841 [2024-11-27 21:47:14.690304] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:51.841 [2024-11-27 21:47:14.690354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:51.841 [2024-11-27 21:47:14.690689] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:14:51.841 [2024-11-27 21:47:14.690893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:51.841 [2024-11-27 21:47:14.690939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:51.841 [2024-11-27 21:47:14.691128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.841 "name": "raid_bdev1", 00:14:51.841 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:51.841 "strip_size_kb": 0, 00:14:51.841 "state": "online", 00:14:51.841 "raid_level": "raid1", 00:14:51.841 "superblock": true, 00:14:51.841 "num_base_bdevs": 2, 00:14:51.841 
"num_base_bdevs_discovered": 2, 00:14:51.841 "num_base_bdevs_operational": 2, 00:14:51.841 "base_bdevs_list": [ 00:14:51.841 { 00:14:51.841 "name": "pt1", 00:14:51.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:51.841 "is_configured": true, 00:14:51.841 "data_offset": 256, 00:14:51.841 "data_size": 7936 00:14:51.841 }, 00:14:51.841 { 00:14:51.841 "name": "pt2", 00:14:51.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:51.841 "is_configured": true, 00:14:51.841 "data_offset": 256, 00:14:51.841 "data_size": 7936 00:14:51.841 } 00:14:51.841 ] 00:14:51.841 }' 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.841 21:47:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.101 [2024-11-27 21:47:15.079818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:52.101 "name": "raid_bdev1", 00:14:52.101 "aliases": [ 00:14:52.101 "b3d5a7a1-4f83-422c-8aa2-552f31e8b256" 00:14:52.101 ], 00:14:52.101 "product_name": "Raid Volume", 00:14:52.101 "block_size": 4096, 00:14:52.101 "num_blocks": 7936, 00:14:52.101 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:52.101 "assigned_rate_limits": { 00:14:52.101 "rw_ios_per_sec": 0, 00:14:52.101 "rw_mbytes_per_sec": 0, 00:14:52.101 "r_mbytes_per_sec": 0, 00:14:52.101 "w_mbytes_per_sec": 0 00:14:52.101 }, 00:14:52.101 "claimed": false, 00:14:52.101 "zoned": false, 00:14:52.101 "supported_io_types": { 00:14:52.101 "read": true, 00:14:52.101 "write": true, 00:14:52.101 "unmap": false, 00:14:52.101 "flush": false, 00:14:52.101 "reset": true, 00:14:52.101 "nvme_admin": false, 00:14:52.101 "nvme_io": false, 00:14:52.101 "nvme_io_md": false, 00:14:52.101 "write_zeroes": true, 00:14:52.101 "zcopy": false, 00:14:52.101 "get_zone_info": false, 00:14:52.101 "zone_management": false, 00:14:52.101 "zone_append": false, 00:14:52.101 "compare": false, 00:14:52.101 "compare_and_write": false, 00:14:52.101 "abort": false, 00:14:52.101 "seek_hole": false, 00:14:52.101 "seek_data": false, 00:14:52.101 "copy": false, 00:14:52.101 "nvme_iov_md": false 00:14:52.101 }, 00:14:52.101 "memory_domains": [ 00:14:52.101 { 00:14:52.101 "dma_device_id": "system", 00:14:52.101 "dma_device_type": 1 00:14:52.101 }, 00:14:52.101 { 00:14:52.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.101 "dma_device_type": 2 00:14:52.101 }, 00:14:52.101 { 00:14:52.101 "dma_device_id": "system", 00:14:52.101 "dma_device_type": 1 00:14:52.101 }, 00:14:52.101 { 00:14:52.101 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.101 "dma_device_type": 2 00:14:52.101 } 00:14:52.101 ], 
00:14:52.101 "driver_specific": { 00:14:52.101 "raid": { 00:14:52.101 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:52.101 "strip_size_kb": 0, 00:14:52.101 "state": "online", 00:14:52.101 "raid_level": "raid1", 00:14:52.101 "superblock": true, 00:14:52.101 "num_base_bdevs": 2, 00:14:52.101 "num_base_bdevs_discovered": 2, 00:14:52.101 "num_base_bdevs_operational": 2, 00:14:52.101 "base_bdevs_list": [ 00:14:52.101 { 00:14:52.101 "name": "pt1", 00:14:52.101 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.101 "is_configured": true, 00:14:52.101 "data_offset": 256, 00:14:52.101 "data_size": 7936 00:14:52.101 }, 00:14:52.101 { 00:14:52.101 "name": "pt2", 00:14:52.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.101 "is_configured": true, 00:14:52.101 "data_offset": 256, 00:14:52.101 "data_size": 7936 00:14:52.101 } 00:14:52.101 ] 00:14:52.101 } 00:14:52.101 } 00:14:52.101 }' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:52.101 pt2' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.101 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:52.362 [2024-11-27 21:47:15.303342] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b3d5a7a1-4f83-422c-8aa2-552f31e8b256 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z b3d5a7a1-4f83-422c-8aa2-552f31e8b256 ']' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 [2024-11-27 21:47:15.347050] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.362 [2024-11-27 21:47:15.347075] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:52.362 [2024-11-27 21:47:15.347146] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:52.362 [2024-11-27 21:47:15.347202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:52.362 [2024-11-27 21:47:15.347210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.362 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.624 [2024-11-27 21:47:15.482911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:52.624 [2024-11-27 21:47:15.484944] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:52.624 [2024-11-27 21:47:15.485088] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:52.624 [2024-11-27 21:47:15.485185] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:52.624 [2024-11-27 21:47:15.485255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:52.624 [2024-11-27 21:47:15.485267] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:14:52.624 request: 00:14:52.624 { 00:14:52.624 "name": "raid_bdev1", 00:14:52.624 "raid_level": "raid1", 00:14:52.624 "base_bdevs": [ 00:14:52.624 "malloc1", 00:14:52.624 "malloc2" 00:14:52.624 ], 00:14:52.624 "superblock": false, 00:14:52.624 "method": "bdev_raid_create", 00:14:52.624 "req_id": 1 00:14:52.624 } 00:14:52.624 Got JSON-RPC error response 00:14:52.624 response: 00:14:52.624 { 00:14:52.624 "code": -17, 00:14:52.624 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:52.624 } 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.624 [2024-11-27 21:47:15.550769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:52.624 [2024-11-27 21:47:15.550835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.624 [2024-11-27 21:47:15.550854] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:52.624 [2024-11-27 21:47:15.550863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.624 [2024-11-27 21:47:15.552982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.624 [2024-11-27 21:47:15.553016] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:52.624 [2024-11-27 21:47:15.553075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:52.624 [2024-11-27 21:47:15.553111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:52.624 pt1 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.624 "name": "raid_bdev1", 00:14:52.624 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:52.624 "strip_size_kb": 0, 00:14:52.624 "state": "configuring", 00:14:52.624 "raid_level": "raid1", 00:14:52.624 "superblock": true, 00:14:52.624 "num_base_bdevs": 2, 00:14:52.624 "num_base_bdevs_discovered": 1, 00:14:52.624 "num_base_bdevs_operational": 2, 00:14:52.624 "base_bdevs_list": [ 00:14:52.624 { 00:14:52.624 "name": "pt1", 00:14:52.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:52.624 "is_configured": true, 00:14:52.624 "data_offset": 256, 00:14:52.624 "data_size": 7936 00:14:52.624 }, 00:14:52.624 { 00:14:52.624 "name": null, 00:14:52.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:52.624 "is_configured": false, 00:14:52.624 "data_offset": 256, 00:14:52.624 "data_size": 7936 00:14:52.624 } 
00:14:52.624 ] 00:14:52.624 }' 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.624 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:52.885 [2024-11-27 21:47:15.986018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:52.885 [2024-11-27 21:47:15.986137] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.885 [2024-11-27 21:47:15.986173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:52.885 [2024-11-27 21:47:15.986200] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.885 [2024-11-27 21:47:15.986604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.885 [2024-11-27 21:47:15.986660] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:52.885 [2024-11-27 21:47:15.986771] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:52.885 [2024-11-27 21:47:15.986852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:52.885 [2024-11-27 21:47:15.986986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000001900 00:14:52.885 [2024-11-27 21:47:15.987026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:52.885 [2024-11-27 21:47:15.987304] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:52.885 [2024-11-27 21:47:15.987458] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:14:52.885 [2024-11-27 21:47:15.987503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:14:52.885 [2024-11-27 21:47:15.987664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.885 pt2 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.885 21:47:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.885 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.145 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.145 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.145 "name": "raid_bdev1", 00:14:53.145 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:53.145 "strip_size_kb": 0, 00:14:53.145 "state": "online", 00:14:53.145 "raid_level": "raid1", 00:14:53.145 "superblock": true, 00:14:53.145 "num_base_bdevs": 2, 00:14:53.145 "num_base_bdevs_discovered": 2, 00:14:53.145 "num_base_bdevs_operational": 2, 00:14:53.145 "base_bdevs_list": [ 00:14:53.145 { 00:14:53.145 "name": "pt1", 00:14:53.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.145 "is_configured": true, 00:14:53.145 "data_offset": 256, 00:14:53.145 "data_size": 7936 00:14:53.145 }, 00:14:53.145 { 00:14:53.145 "name": "pt2", 00:14:53.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.145 "is_configured": true, 00:14:53.145 "data_offset": 256, 00:14:53.145 "data_size": 7936 00:14:53.145 } 00:14:53.145 ] 00:14:53.145 }' 00:14:53.145 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.145 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.406 [2024-11-27 21:47:16.429526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:53.406 "name": "raid_bdev1", 00:14:53.406 "aliases": [ 00:14:53.406 "b3d5a7a1-4f83-422c-8aa2-552f31e8b256" 00:14:53.406 ], 00:14:53.406 "product_name": "Raid Volume", 00:14:53.406 "block_size": 4096, 00:14:53.406 "num_blocks": 7936, 00:14:53.406 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:53.406 "assigned_rate_limits": { 00:14:53.406 "rw_ios_per_sec": 0, 00:14:53.406 "rw_mbytes_per_sec": 0, 00:14:53.406 "r_mbytes_per_sec": 0, 00:14:53.406 "w_mbytes_per_sec": 0 00:14:53.406 }, 00:14:53.406 "claimed": false, 00:14:53.406 "zoned": false, 00:14:53.406 "supported_io_types": { 00:14:53.406 "read": true, 00:14:53.406 "write": true, 00:14:53.406 "unmap": false, 
00:14:53.406 "flush": false, 00:14:53.406 "reset": true, 00:14:53.406 "nvme_admin": false, 00:14:53.406 "nvme_io": false, 00:14:53.406 "nvme_io_md": false, 00:14:53.406 "write_zeroes": true, 00:14:53.406 "zcopy": false, 00:14:53.406 "get_zone_info": false, 00:14:53.406 "zone_management": false, 00:14:53.406 "zone_append": false, 00:14:53.406 "compare": false, 00:14:53.406 "compare_and_write": false, 00:14:53.406 "abort": false, 00:14:53.406 "seek_hole": false, 00:14:53.406 "seek_data": false, 00:14:53.406 "copy": false, 00:14:53.406 "nvme_iov_md": false 00:14:53.406 }, 00:14:53.406 "memory_domains": [ 00:14:53.406 { 00:14:53.406 "dma_device_id": "system", 00:14:53.406 "dma_device_type": 1 00:14:53.406 }, 00:14:53.406 { 00:14:53.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.406 "dma_device_type": 2 00:14:53.406 }, 00:14:53.406 { 00:14:53.406 "dma_device_id": "system", 00:14:53.406 "dma_device_type": 1 00:14:53.406 }, 00:14:53.406 { 00:14:53.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:53.406 "dma_device_type": 2 00:14:53.406 } 00:14:53.406 ], 00:14:53.406 "driver_specific": { 00:14:53.406 "raid": { 00:14:53.406 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:53.406 "strip_size_kb": 0, 00:14:53.406 "state": "online", 00:14:53.406 "raid_level": "raid1", 00:14:53.406 "superblock": true, 00:14:53.406 "num_base_bdevs": 2, 00:14:53.406 "num_base_bdevs_discovered": 2, 00:14:53.406 "num_base_bdevs_operational": 2, 00:14:53.406 "base_bdevs_list": [ 00:14:53.406 { 00:14:53.406 "name": "pt1", 00:14:53.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:53.406 "is_configured": true, 00:14:53.406 "data_offset": 256, 00:14:53.406 "data_size": 7936 00:14:53.406 }, 00:14:53.406 { 00:14:53.406 "name": "pt2", 00:14:53.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.406 "is_configured": true, 00:14:53.406 "data_offset": 256, 00:14:53.406 "data_size": 7936 00:14:53.406 } 00:14:53.406 ] 00:14:53.406 } 00:14:53.406 } 00:14:53.406 }' 00:14:53.406 
21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:53.406 pt2' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.406 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:53.666 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 [2024-11-27 21:47:16.625177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' b3d5a7a1-4f83-422c-8aa2-552f31e8b256 '!=' b3d5a7a1-4f83-422c-8aa2-552f31e8b256 ']' 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 [2024-11-27 21:47:16.672918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.667 "name": "raid_bdev1", 00:14:53.667 "uuid": 
"b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:53.667 "strip_size_kb": 0, 00:14:53.667 "state": "online", 00:14:53.667 "raid_level": "raid1", 00:14:53.667 "superblock": true, 00:14:53.667 "num_base_bdevs": 2, 00:14:53.667 "num_base_bdevs_discovered": 1, 00:14:53.667 "num_base_bdevs_operational": 1, 00:14:53.667 "base_bdevs_list": [ 00:14:53.667 { 00:14:53.667 "name": null, 00:14:53.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.667 "is_configured": false, 00:14:53.667 "data_offset": 0, 00:14:53.667 "data_size": 7936 00:14:53.667 }, 00:14:53.667 { 00:14:53.667 "name": "pt2", 00:14:53.667 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:53.667 "is_configured": true, 00:14:53.667 "data_offset": 256, 00:14:53.667 "data_size": 7936 00:14:53.667 } 00:14:53.667 ] 00:14:53.667 }' 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.667 21:47:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.236 [2024-11-27 21:47:17.116172] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.236 [2024-11-27 21:47:17.116249] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.236 [2024-11-27 21:47:17.116362] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.236 [2024-11-27 21:47:17.116459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:54.236 [2024-11-27 21:47:17.116503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state 
offline 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.236 [2024-11-27 21:47:17.184058] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:54.236 [2024-11-27 21:47:17.184116] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.236 [2024-11-27 21:47:17.184133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:14:54.236 [2024-11-27 21:47:17.184140] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.236 [2024-11-27 21:47:17.186236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.236 [2024-11-27 21:47:17.186272] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:54.236 [2024-11-27 21:47:17.186340] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:54.236 [2024-11-27 21:47:17.186367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.236 [2024-11-27 21:47:17.186441] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:14:54.236 [2024-11-27 21:47:17.186448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:54.236 [2024-11-27 21:47:17.186677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:14:54.236 [2024-11-27 21:47:17.186781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:14:54.236 [2024-11-27 21:47:17.186801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000001c80 00:14:54.236 [2024-11-27 21:47:17.186925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.236 pt2 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.236 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.237 21:47:17 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.237 "name": "raid_bdev1", 00:14:54.237 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:54.237 "strip_size_kb": 0, 00:14:54.237 "state": "online", 00:14:54.237 "raid_level": "raid1", 00:14:54.237 "superblock": true, 00:14:54.237 "num_base_bdevs": 2, 00:14:54.237 "num_base_bdevs_discovered": 1, 00:14:54.237 "num_base_bdevs_operational": 1, 00:14:54.237 "base_bdevs_list": [ 00:14:54.237 { 00:14:54.237 "name": null, 00:14:54.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.237 "is_configured": false, 00:14:54.237 "data_offset": 256, 00:14:54.237 "data_size": 7936 00:14:54.237 }, 00:14:54.237 { 00:14:54.237 "name": "pt2", 00:14:54.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.237 "is_configured": true, 00:14:54.237 "data_offset": 256, 00:14:54.237 "data_size": 7936 00:14:54.237 } 00:14:54.237 ] 00:14:54.237 }' 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.237 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.806 [2024-11-27 21:47:17.655270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.806 [2024-11-27 21:47:17.655294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:54.806 [2024-11-27 21:47:17.655344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:54.806 [2024-11-27 21:47:17.655379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:14:54.806 [2024-11-27 21:47:17.655388] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.806 [2024-11-27 21:47:17.719142] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:54.806 [2024-11-27 21:47:17.719200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.806 [2024-11-27 21:47:17.719218] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:14:54.806 [2024-11-27 21:47:17.719230] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.806 [2024-11-27 21:47:17.721290] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.806 [2024-11-27 21:47:17.721401] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:54.806 [2024-11-27 21:47:17.721480] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:54.806 [2024-11-27 21:47:17.721541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:54.806 [2024-11-27 21:47:17.721646] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:54.806 [2024-11-27 21:47:17.721667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:54.806 [2024-11-27 21:47:17.721681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:14:54.806 [2024-11-27 21:47:17.721712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:54.806 [2024-11-27 21:47:17.721777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:14:54.806 [2024-11-27 21:47:17.721789] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:54.806 [2024-11-27 21:47:17.722017] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:14:54.806 [2024-11-27 21:47:17.722153] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:14:54.806 [2024-11-27 21:47:17.722172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:14:54.806 [2024-11-27 21:47:17.722288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.806 pt1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.806 "name": "raid_bdev1", 00:14:54.806 "uuid": "b3d5a7a1-4f83-422c-8aa2-552f31e8b256", 00:14:54.806 "strip_size_kb": 0, 00:14:54.806 "state": "online", 00:14:54.806 
"raid_level": "raid1", 00:14:54.806 "superblock": true, 00:14:54.806 "num_base_bdevs": 2, 00:14:54.806 "num_base_bdevs_discovered": 1, 00:14:54.806 "num_base_bdevs_operational": 1, 00:14:54.806 "base_bdevs_list": [ 00:14:54.806 { 00:14:54.806 "name": null, 00:14:54.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.806 "is_configured": false, 00:14:54.806 "data_offset": 256, 00:14:54.806 "data_size": 7936 00:14:54.806 }, 00:14:54.806 { 00:14:54.806 "name": "pt2", 00:14:54.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:54.806 "is_configured": true, 00:14:54.806 "data_offset": 256, 00:14:54.806 "data_size": 7936 00:14:54.806 } 00:14:54.806 ] 00:14:54.806 }' 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.806 21:47:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:55.066 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:55.066 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.066 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.066 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:14:55.326 [2024-11-27 21:47:18.198704] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' b3d5a7a1-4f83-422c-8aa2-552f31e8b256 '!=' b3d5a7a1-4f83-422c-8aa2-552f31e8b256 ']' 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 96212 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 96212 ']' 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 96212 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96212 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96212' 00:14:55.326 killing process with pid 96212 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 96212 00:14:55.326 [2024-11-27 21:47:18.283338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:55.326 [2024-11-27 21:47:18.283396] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.326 [2024-11-27 21:47:18.283434] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.326 [2024-11-27 
21:47:18.283441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:14:55.326 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 96212 00:14:55.326 [2024-11-27 21:47:18.306131] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:55.586 ************************************ 00:14:55.586 END TEST raid_superblock_test_4k 00:14:55.586 ************************************ 00:14:55.586 21:47:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:14:55.586 00:14:55.586 real 0m4.819s 00:14:55.586 user 0m7.889s 00:14:55.586 sys 0m1.058s 00:14:55.586 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.586 21:47:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.586 21:47:18 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:14:55.586 21:47:18 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:14:55.586 21:47:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:55.586 21:47:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.586 21:47:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:55.586 ************************************ 00:14:55.586 START TEST raid_rebuild_test_sb_4k 00:14:55.586 ************************************ 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:55.586 21:47:18 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=96518 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 96518 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 96518 ']' 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:55.586 21:47:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:55.586 [2024-11-27 21:47:18.687388] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:14:55.586 [2024-11-27 21:47:18.687584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:55.586 Zero copy mechanism will not be used. 
00:14:55.586 -allocations --file-prefix=spdk_pid96518 ] 00:14:55.846 [2024-11-27 21:47:18.840904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.846 [2024-11-27 21:47:18.865936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.846 [2024-11-27 21:47:18.908758] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.846 [2024-11-27 21:47:18.908927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 BaseBdev1_malloc 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.413 [2024-11-27 21:47:19.520587] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:56.413 [2024-11-27 21:47:19.520726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.413 [2024-11-27 21:47:19.520769] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000006680 00:14:56.413 [2024-11-27 21:47:19.520815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.413 [2024-11-27 21:47:19.522961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.413 [2024-11-27 21:47:19.523045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.413 BaseBdev1 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:56.413 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:14:56.414 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.414 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.672 BaseBdev2_malloc 00:14:56.672 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.672 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:56.672 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.672 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.672 [2024-11-27 21:47:19.549258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:56.672 [2024-11-27 21:47:19.549325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.672 [2024-11-27 21:47:19.549347] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:56.672 [2024-11-27 21:47:19.549355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:56.672 [2024-11-27 21:47:19.551435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.673 [2024-11-27 21:47:19.551476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:56.673 BaseBdev2 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.673 spare_malloc 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.673 spare_delay 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.673 [2024-11-27 21:47:19.589705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:56.673 [2024-11-27 21:47:19.589754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.673 [2024-11-27 21:47:19.589771] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:56.673 [2024-11-27 21:47:19.589779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.673 [2024-11-27 21:47:19.591844] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.673 [2024-11-27 21:47:19.591877] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:56.673 spare 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.673 [2024-11-27 21:47:19.601728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.673 [2024-11-27 21:47:19.603569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:56.673 [2024-11-27 21:47:19.603846] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:14:56.673 [2024-11-27 21:47:19.603862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:14:56.673 [2024-11-27 21:47:19.604147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:14:56.673 [2024-11-27 21:47:19.604292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:14:56.673 [2024-11-27 21:47:19.604310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:14:56.673 [2024-11-27 21:47:19.604420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.673 
21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.673 "name": "raid_bdev1", 00:14:56.673 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 
00:14:56.673 "strip_size_kb": 0, 00:14:56.673 "state": "online", 00:14:56.673 "raid_level": "raid1", 00:14:56.673 "superblock": true, 00:14:56.673 "num_base_bdevs": 2, 00:14:56.673 "num_base_bdevs_discovered": 2, 00:14:56.673 "num_base_bdevs_operational": 2, 00:14:56.673 "base_bdevs_list": [ 00:14:56.673 { 00:14:56.673 "name": "BaseBdev1", 00:14:56.673 "uuid": "ead52749-e899-541e-bce9-c0f37be63598", 00:14:56.673 "is_configured": true, 00:14:56.673 "data_offset": 256, 00:14:56.673 "data_size": 7936 00:14:56.673 }, 00:14:56.673 { 00:14:56.673 "name": "BaseBdev2", 00:14:56.673 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:14:56.673 "is_configured": true, 00:14:56.673 "data_offset": 256, 00:14:56.673 "data_size": 7936 00:14:56.673 } 00:14:56.673 ] 00:14:56.673 }' 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.673 21:47:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.242 [2024-11-27 21:47:20.077140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.242 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:57.242 [2024-11-27 21:47:20.356490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000002600 00:14:57.502 /dev/nbd0 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:57.502 1+0 records in 00:14:57.502 1+0 records out 00:14:57.502 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431807 s, 9.5 MB/s 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:57.502 21:47:20 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:57.502 21:47:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:14:58.070 7936+0 records in 00:14:58.070 7936+0 records out 00:14:58.070 32505856 bytes (33 MB, 31 MiB) copied, 0.632792 s, 51.4 MB/s 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:58.070 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- 
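Editor's note: the `dd` transfer above reports 7936 records of 4096 bytes; the byte total it prints is consistent with that block count, as a quick arithmetic check shows:

```shell
# 7936 blocks (the raid bdev size in 4k blocks) times the 4096-byte block
# size should equal the 32505856 bytes dd reports having copied.
echo $(( 7936 * 4096 ))
# prints: 32505856
```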
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:58.330 [2024-11-27 21:47:21.269260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.330 [2024-11-27 21:47:21.281350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.330 "name": "raid_bdev1", 00:14:58.330 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:14:58.330 "strip_size_kb": 0, 00:14:58.330 "state": "online", 00:14:58.330 "raid_level": "raid1", 00:14:58.330 "superblock": true, 00:14:58.330 "num_base_bdevs": 2, 00:14:58.330 "num_base_bdevs_discovered": 1, 00:14:58.330 "num_base_bdevs_operational": 1, 00:14:58.330 "base_bdevs_list": [ 00:14:58.330 { 00:14:58.330 "name": null, 00:14:58.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.330 "is_configured": false, 00:14:58.330 "data_offset": 0, 00:14:58.330 "data_size": 7936 00:14:58.330 }, 00:14:58.330 { 00:14:58.330 "name": "BaseBdev2", 00:14:58.330 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:14:58.330 "is_configured": true, 00:14:58.330 "data_offset": 256, 00:14:58.330 "data_size": 7936 00:14:58.330 } 00:14:58.330 ] 00:14:58.330 }' 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.330 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:14:58.898 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:58.898 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.898 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:58.898 [2024-11-27 21:47:21.724611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:58.898 [2024-11-27 21:47:21.729645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:14:58.898 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.898 21:47:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:58.898 [2024-11-27 21:47:21.731543] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.837 21:47:22 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.837 "name": "raid_bdev1", 00:14:59.837 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:14:59.837 "strip_size_kb": 0, 00:14:59.837 "state": "online", 00:14:59.837 "raid_level": "raid1", 00:14:59.837 "superblock": true, 00:14:59.837 "num_base_bdevs": 2, 00:14:59.837 "num_base_bdevs_discovered": 2, 00:14:59.837 "num_base_bdevs_operational": 2, 00:14:59.837 "process": { 00:14:59.837 "type": "rebuild", 00:14:59.837 "target": "spare", 00:14:59.837 "progress": { 00:14:59.837 "blocks": 2560, 00:14:59.837 "percent": 32 00:14:59.837 } 00:14:59.837 }, 00:14:59.837 "base_bdevs_list": [ 00:14:59.837 { 00:14:59.837 "name": "spare", 00:14:59.837 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:14:59.837 "is_configured": true, 00:14:59.837 "data_offset": 256, 00:14:59.837 "data_size": 7936 00:14:59.837 }, 00:14:59.837 { 00:14:59.837 "name": "BaseBdev2", 00:14:59.837 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:14:59.837 "is_configured": true, 00:14:59.837 "data_offset": 256, 00:14:59.837 "data_size": 7936 00:14:59.837 } 00:14:59.837 ] 00:14:59.837 }' 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.837 21:47:22 
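Editor's note: in the rebuild progress object above, `"percent": 32` follows from `"blocks": 2560` against the 7936-block bdev size (integer division, rounded down). A one-line check of that relationship:

```shell
# Rebuild percentage = completed blocks * 100 / total blocks, truncated.
blocks=2560; total=7936
echo $(( blocks * 100 / total ))
# prints: 32
```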
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.837 [2024-11-27 21:47:22.892182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.837 [2024-11-27 21:47:22.936054] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:59.837 [2024-11-27 21:47:22.936108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.837 [2024-11-27 21:47:22.936126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:59.837 [2024-11-27 21:47:22.936133] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.837 21:47:22 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:14:59.837 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.097 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.097 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.097 "name": "raid_bdev1", 00:15:00.097 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:00.097 "strip_size_kb": 0, 00:15:00.097 "state": "online", 00:15:00.097 "raid_level": "raid1", 00:15:00.097 "superblock": true, 00:15:00.097 "num_base_bdevs": 2, 00:15:00.097 "num_base_bdevs_discovered": 1, 00:15:00.097 "num_base_bdevs_operational": 1, 00:15:00.097 "base_bdevs_list": [ 00:15:00.097 { 00:15:00.097 "name": null, 00:15:00.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.097 "is_configured": false, 00:15:00.097 "data_offset": 0, 00:15:00.097 "data_size": 7936 00:15:00.097 }, 00:15:00.097 { 00:15:00.097 "name": "BaseBdev2", 00:15:00.097 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:00.097 "is_configured": true, 00:15:00.097 "data_offset": 256, 00:15:00.097 "data_size": 7936 00:15:00.097 } 00:15:00.097 ] 00:15:00.097 }' 00:15:00.097 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.097 21:47:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.356 21:47:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.356 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.357 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.357 "name": "raid_bdev1", 00:15:00.357 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:00.357 "strip_size_kb": 0, 00:15:00.357 "state": "online", 00:15:00.357 "raid_level": "raid1", 00:15:00.357 "superblock": true, 00:15:00.357 "num_base_bdevs": 2, 00:15:00.357 "num_base_bdevs_discovered": 1, 00:15:00.357 "num_base_bdevs_operational": 1, 00:15:00.357 "base_bdevs_list": [ 00:15:00.357 { 00:15:00.357 "name": null, 00:15:00.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.357 "is_configured": false, 00:15:00.357 "data_offset": 0, 00:15:00.357 "data_size": 7936 00:15:00.357 }, 00:15:00.357 { 00:15:00.357 "name": "BaseBdev2", 00:15:00.357 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:00.357 "is_configured": true, 00:15:00.357 "data_offset": 256, 00:15:00.357 "data_size": 7936 00:15:00.357 } 00:15:00.357 ] 00:15:00.357 }' 00:15:00.357 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.616 21:47:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:00.616 [2024-11-27 21:47:23.543907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:00.616 [2024-11-27 21:47:23.548311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.616 21:47:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:00.616 [2024-11-27 21:47:23.550254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.554 "name": "raid_bdev1", 00:15:01.554 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:01.554 "strip_size_kb": 0, 00:15:01.554 "state": "online", 00:15:01.554 "raid_level": "raid1", 00:15:01.554 "superblock": true, 00:15:01.554 "num_base_bdevs": 2, 00:15:01.554 "num_base_bdevs_discovered": 2, 00:15:01.554 "num_base_bdevs_operational": 2, 00:15:01.554 "process": { 00:15:01.554 "type": "rebuild", 00:15:01.554 "target": "spare", 00:15:01.554 "progress": { 00:15:01.554 "blocks": 2560, 00:15:01.554 "percent": 32 00:15:01.554 } 00:15:01.554 }, 00:15:01.554 "base_bdevs_list": [ 00:15:01.554 { 00:15:01.554 "name": "spare", 00:15:01.554 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 256, 00:15:01.554 "data_size": 7936 00:15:01.554 }, 00:15:01.554 { 00:15:01.554 "name": "BaseBdev2", 00:15:01.554 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:01.554 "is_configured": true, 00:15:01.554 "data_offset": 256, 00:15:01.554 "data_size": 7936 00:15:01.554 } 00:15:01.554 ] 00:15:01.554 }' 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.554 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:01.814 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=553 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:01.814 21:47:24 
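Editor's note: the trace above records a real script error, `bdev_raid.sh: line 666: [: =: unary operator expected`, at the expansion `'[' = false ']'`. This is the classic symptom of testing an unset or empty variable inside single brackets without quotes: the left operand vanishes, leaving `[ = false ]`. A minimal sketch of the failure mode and its usual fix (the variable name here is hypothetical, not the one in bdev_raid.sh):

```shell
# With flag empty, an unquoted test would expand to '[ = false ]' and fail
# with "unary operator expected". Quoting the expansion keeps the test
# well-formed: '[ "" = false ]' is simply false.
flag=""
[ "$flag" = false ] && echo "flag is false" || echo "flag is not false"
# prints: flag is not false
```

Using `[[ $flag = false ]]` in bash avoids the problem entirely, since `[[ ]]` does not word-split its operands.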
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.814 "name": "raid_bdev1", 00:15:01.814 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:01.814 "strip_size_kb": 0, 00:15:01.814 "state": "online", 00:15:01.814 "raid_level": "raid1", 00:15:01.814 "superblock": true, 00:15:01.814 "num_base_bdevs": 2, 00:15:01.814 "num_base_bdevs_discovered": 2, 00:15:01.814 "num_base_bdevs_operational": 2, 00:15:01.814 "process": { 00:15:01.814 "type": "rebuild", 00:15:01.814 "target": "spare", 00:15:01.814 "progress": { 00:15:01.814 "blocks": 2816, 00:15:01.814 "percent": 35 00:15:01.814 } 00:15:01.814 }, 00:15:01.814 "base_bdevs_list": [ 00:15:01.814 { 00:15:01.814 "name": "spare", 00:15:01.814 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:01.814 "is_configured": true, 00:15:01.814 "data_offset": 256, 00:15:01.814 "data_size": 7936 00:15:01.814 }, 00:15:01.814 { 00:15:01.814 "name": "BaseBdev2", 00:15:01.814 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:01.814 "is_configured": true, 00:15:01.814 "data_offset": 256, 00:15:01.814 "data_size": 7936 00:15:01.814 } 00:15:01.814 ] 00:15:01.814 }' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.814 21:47:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:02.753 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:03.013 "name": "raid_bdev1", 00:15:03.013 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:03.013 "strip_size_kb": 0, 00:15:03.013 "state": "online", 00:15:03.013 "raid_level": "raid1", 00:15:03.013 "superblock": true, 00:15:03.013 "num_base_bdevs": 2, 00:15:03.013 "num_base_bdevs_discovered": 2, 00:15:03.013 "num_base_bdevs_operational": 2, 00:15:03.013 "process": { 00:15:03.013 "type": "rebuild", 00:15:03.013 "target": "spare", 00:15:03.013 "progress": { 00:15:03.013 "blocks": 5632, 00:15:03.013 "percent": 70 00:15:03.013 } 00:15:03.013 }, 00:15:03.013 "base_bdevs_list": [ 00:15:03.013 { 00:15:03.013 "name": "spare", 00:15:03.013 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:03.013 "is_configured": true, 00:15:03.013 "data_offset": 256, 00:15:03.013 "data_size": 7936 00:15:03.013 
}, 00:15:03.013 { 00:15:03.013 "name": "BaseBdev2", 00:15:03.013 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:03.013 "is_configured": true, 00:15:03.013 "data_offset": 256, 00:15:03.013 "data_size": 7936 00:15:03.013 } 00:15:03.013 ] 00:15:03.013 }' 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:03.013 21:47:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:03.581 [2024-11-27 21:47:26.660502] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:03.581 [2024-11-27 21:47:26.660645] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:03.581 [2024-11-27 21:47:26.660788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:04.150 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.151 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.151 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.151 21:47:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.151 "name": "raid_bdev1", 00:15:04.151 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:04.151 "strip_size_kb": 0, 00:15:04.151 "state": "online", 00:15:04.151 "raid_level": "raid1", 00:15:04.151 "superblock": true, 00:15:04.151 "num_base_bdevs": 2, 00:15:04.151 "num_base_bdevs_discovered": 2, 00:15:04.151 "num_base_bdevs_operational": 2, 00:15:04.151 "base_bdevs_list": [ 00:15:04.151 { 00:15:04.151 "name": "spare", 00:15:04.151 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:04.151 "is_configured": true, 00:15:04.151 "data_offset": 256, 00:15:04.151 "data_size": 7936 00:15:04.151 }, 00:15:04.151 { 00:15:04.151 "name": "BaseBdev2", 00:15:04.151 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:04.151 "is_configured": true, 00:15:04.151 "data_offset": 256, 00:15:04.151 "data_size": 7936 00:15:04.151 } 00:15:04.151 ] 00:15:04.151 }' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 
00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.151 "name": "raid_bdev1", 00:15:04.151 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:04.151 "strip_size_kb": 0, 00:15:04.151 "state": "online", 00:15:04.151 "raid_level": "raid1", 00:15:04.151 "superblock": true, 00:15:04.151 "num_base_bdevs": 2, 00:15:04.151 "num_base_bdevs_discovered": 2, 00:15:04.151 "num_base_bdevs_operational": 2, 00:15:04.151 "base_bdevs_list": [ 00:15:04.151 { 00:15:04.151 "name": "spare", 00:15:04.151 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:04.151 "is_configured": true, 00:15:04.151 "data_offset": 256, 00:15:04.151 "data_size": 7936 00:15:04.151 }, 00:15:04.151 { 00:15:04.151 "name": "BaseBdev2", 00:15:04.151 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:04.151 "is_configured": true, 
00:15:04.151 "data_offset": 256, 00:15:04.151 "data_size": 7936 00:15:04.151 } 00:15:04.151 ] 00:15:04.151 }' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.151 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.410 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.410 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.410 "name": "raid_bdev1", 00:15:04.410 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:04.410 "strip_size_kb": 0, 00:15:04.410 "state": "online", 00:15:04.410 "raid_level": "raid1", 00:15:04.410 "superblock": true, 00:15:04.410 "num_base_bdevs": 2, 00:15:04.410 "num_base_bdevs_discovered": 2, 00:15:04.410 "num_base_bdevs_operational": 2, 00:15:04.410 "base_bdevs_list": [ 00:15:04.410 { 00:15:04.410 "name": "spare", 00:15:04.410 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:04.410 "is_configured": true, 00:15:04.410 "data_offset": 256, 00:15:04.410 "data_size": 7936 00:15:04.410 }, 00:15:04.410 { 00:15:04.410 "name": "BaseBdev2", 00:15:04.410 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:04.410 "is_configured": true, 00:15:04.410 "data_offset": 256, 00:15:04.410 "data_size": 7936 00:15:04.410 } 00:15:04.410 ] 00:15:04.410 }' 00:15:04.410 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.410 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.670 [2024-11-27 21:47:27.671230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.670 [2024-11-27 21:47:27.671303] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:04.670 [2024-11-27 21:47:27.671424] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:04.670 [2024-11-27 21:47:27.671550] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:04.670 [2024-11-27 21:47:27.671600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:04.670 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:04.930 /dev/nbd0 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:04.930 1+0 records in 00:15:04.930 1+0 records out 00:15:04.930 4096 bytes (4.1 kB, 4.0 
KiB) copied, 0.000360145 s, 11.4 MB/s 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:04.930 21:47:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:05.189 /dev/nbd1 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:05.190 1+0 records in 00:15:05.190 1+0 records out 00:15:05.190 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000303904 s, 13.5 MB/s 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 
00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.190 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:05.450 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # 
grep -q -w nbd1 /proc/partitions 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.710 [2024-11-27 21:47:28.714795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:05.710 [2024-11-27 21:47:28.714883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.710 [2024-11-27 21:47:28.714903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:05.710 [2024-11-27 21:47:28.714916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.710 [2024-11-27 21:47:28.717125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.710 [2024-11-27 21:47:28.717202] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:05.710 [2024-11-27 21:47:28.717314] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:05.710 [2024-11-27 
21:47:28.717396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.710 [2024-11-27 21:47:28.717600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.710 spare 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.710 [2024-11-27 21:47:28.817541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:05.710 [2024-11-27 21:47:28.817565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:05.710 [2024-11-27 21:47:28.817830] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:05.710 [2024-11-27 21:47:28.817985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:05.710 [2024-11-27 21:47:28.817997] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:05.710 [2024-11-27 21:47:28.818139] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.710 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:05.711 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.970 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.970 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.970 "name": "raid_bdev1", 00:15:05.970 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:05.970 "strip_size_kb": 0, 00:15:05.970 "state": "online", 00:15:05.970 "raid_level": "raid1", 00:15:05.970 "superblock": true, 00:15:05.970 "num_base_bdevs": 2, 00:15:05.970 "num_base_bdevs_discovered": 2, 00:15:05.970 "num_base_bdevs_operational": 2, 00:15:05.970 "base_bdevs_list": [ 00:15:05.970 { 00:15:05.970 "name": "spare", 00:15:05.970 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:05.970 "is_configured": true, 00:15:05.970 "data_offset": 256, 00:15:05.970 "data_size": 7936 00:15:05.970 }, 00:15:05.970 { 
00:15:05.970 "name": "BaseBdev2", 00:15:05.970 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:05.970 "is_configured": true, 00:15:05.970 "data_offset": 256, 00:15:05.970 "data_size": 7936 00:15:05.970 } 00:15:05.970 ] 00:15:05.970 }' 00:15:05.970 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.970 21:47:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.233 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.495 "name": "raid_bdev1", 00:15:06.495 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:06.495 "strip_size_kb": 0, 00:15:06.495 "state": "online", 00:15:06.495 "raid_level": "raid1", 00:15:06.495 "superblock": true, 00:15:06.495 "num_base_bdevs": 2, 00:15:06.495 "num_base_bdevs_discovered": 2, 
00:15:06.495 "num_base_bdevs_operational": 2, 00:15:06.495 "base_bdevs_list": [ 00:15:06.495 { 00:15:06.495 "name": "spare", 00:15:06.495 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:06.495 "is_configured": true, 00:15:06.495 "data_offset": 256, 00:15:06.495 "data_size": 7936 00:15:06.495 }, 00:15:06.495 { 00:15:06.495 "name": "BaseBdev2", 00:15:06.495 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:06.495 "is_configured": true, 00:15:06.495 "data_offset": 256, 00:15:06.495 "data_size": 7936 00:15:06.495 } 00:15:06.495 ] 00:15:06.495 }' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:15:06.495 [2024-11-27 21:47:29.517461] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.495 21:47:29 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.495 "name": "raid_bdev1", 00:15:06.495 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:06.495 "strip_size_kb": 0, 00:15:06.495 "state": "online", 00:15:06.495 "raid_level": "raid1", 00:15:06.495 "superblock": true, 00:15:06.495 "num_base_bdevs": 2, 00:15:06.495 "num_base_bdevs_discovered": 1, 00:15:06.495 "num_base_bdevs_operational": 1, 00:15:06.495 "base_bdevs_list": [ 00:15:06.495 { 00:15:06.495 "name": null, 00:15:06.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.495 "is_configured": false, 00:15:06.495 "data_offset": 0, 00:15:06.495 "data_size": 7936 00:15:06.495 }, 00:15:06.495 { 00:15:06.495 "name": "BaseBdev2", 00:15:06.495 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:06.495 "is_configured": true, 00:15:06.495 "data_offset": 256, 00:15:06.495 "data_size": 7936 00:15:06.495 } 00:15:06.495 ] 00:15:06.495 }' 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.495 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.062 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.062 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.062 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.062 [2024-11-27 21:47:29.984649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.062 [2024-11-27 21:47:29.984806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:07.062 [2024-11-27 21:47:29.984836] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:07.062 [2024-11-27 21:47:29.984886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.062 [2024-11-27 21:47:29.989826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:07.062 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.062 21:47:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:07.062 [2024-11-27 21:47:29.991585] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.996 21:47:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.996 21:47:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.996 21:47:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.996 21:47:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.996 21:47:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.996 "name": "raid_bdev1", 00:15:07.996 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:07.996 "strip_size_kb": 0, 00:15:07.996 "state": "online", 
00:15:07.996 "raid_level": "raid1", 00:15:07.996 "superblock": true, 00:15:07.996 "num_base_bdevs": 2, 00:15:07.996 "num_base_bdevs_discovered": 2, 00:15:07.996 "num_base_bdevs_operational": 2, 00:15:07.996 "process": { 00:15:07.996 "type": "rebuild", 00:15:07.996 "target": "spare", 00:15:07.996 "progress": { 00:15:07.996 "blocks": 2560, 00:15:07.996 "percent": 32 00:15:07.996 } 00:15:07.996 }, 00:15:07.996 "base_bdevs_list": [ 00:15:07.996 { 00:15:07.996 "name": "spare", 00:15:07.996 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:07.996 "is_configured": true, 00:15:07.996 "data_offset": 256, 00:15:07.996 "data_size": 7936 00:15:07.996 }, 00:15:07.996 { 00:15:07.996 "name": "BaseBdev2", 00:15:07.996 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:07.996 "is_configured": true, 00:15:07.996 "data_offset": 256, 00:15:07.996 "data_size": 7936 00:15:07.996 } 00:15:07.996 ] 00:15:07.996 }' 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.996 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.256 [2024-11-27 21:47:31.152254] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.256 [2024-11-27 21:47:31.195550] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.256 [2024-11-27 
21:47:31.195600] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.256 [2024-11-27 21:47:31.195616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.256 [2024-11-27 21:47:31.195622] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.256 "name": "raid_bdev1", 00:15:08.256 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:08.256 "strip_size_kb": 0, 00:15:08.256 "state": "online", 00:15:08.256 "raid_level": "raid1", 00:15:08.256 "superblock": true, 00:15:08.256 "num_base_bdevs": 2, 00:15:08.256 "num_base_bdevs_discovered": 1, 00:15:08.256 "num_base_bdevs_operational": 1, 00:15:08.256 "base_bdevs_list": [ 00:15:08.256 { 00:15:08.256 "name": null, 00:15:08.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.256 "is_configured": false, 00:15:08.256 "data_offset": 0, 00:15:08.256 "data_size": 7936 00:15:08.256 }, 00:15:08.256 { 00:15:08.256 "name": "BaseBdev2", 00:15:08.256 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:08.256 "is_configured": true, 00:15:08.256 "data_offset": 256, 00:15:08.256 "data_size": 7936 00:15:08.256 } 00:15:08.256 ] 00:15:08.256 }' 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.256 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.522 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:08.522 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.522 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:08.522 [2024-11-27 21:47:31.635200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:08.522 [2024-11-27 21:47:31.635303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.522 [2024-11-27 21:47:31.635341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000009f80 00:15:08.522 [2024-11-27 21:47:31.635368] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.522 [2024-11-27 21:47:31.635835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.522 [2024-11-27 21:47:31.635892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:08.522 [2024-11-27 21:47:31.635989] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:08.522 [2024-11-27 21:47:31.636015] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:08.522 [2024-11-27 21:47:31.636084] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:08.522 [2024-11-27 21:47:31.636170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:08.522 [2024-11-27 21:47:31.640561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:08.522 spare 00:15:08.522 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.522 21:47:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:08.522 [2024-11-27 21:47:31.642593] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.901 "name": "raid_bdev1", 00:15:09.901 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:09.901 "strip_size_kb": 0, 00:15:09.901 "state": "online", 00:15:09.901 "raid_level": "raid1", 00:15:09.901 "superblock": true, 00:15:09.901 "num_base_bdevs": 2, 00:15:09.901 "num_base_bdevs_discovered": 2, 00:15:09.901 "num_base_bdevs_operational": 2, 00:15:09.901 "process": { 00:15:09.901 "type": "rebuild", 00:15:09.901 "target": "spare", 00:15:09.901 "progress": { 00:15:09.901 "blocks": 2560, 00:15:09.901 "percent": 32 00:15:09.901 } 00:15:09.901 }, 00:15:09.901 "base_bdevs_list": [ 00:15:09.901 { 00:15:09.901 "name": "spare", 00:15:09.901 "uuid": "b70fe958-fd80-5e38-b292-5b0c8e4161d6", 00:15:09.901 "is_configured": true, 00:15:09.901 "data_offset": 256, 00:15:09.901 "data_size": 7936 00:15:09.901 }, 00:15:09.901 { 00:15:09.901 "name": "BaseBdev2", 00:15:09.901 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:09.901 "is_configured": true, 00:15:09.901 "data_offset": 256, 00:15:09.901 "data_size": 7936 00:15:09.901 } 00:15:09.901 ] 00:15:09.901 }' 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.901 [2024-11-27 21:47:32.803161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.901 [2024-11-27 21:47:32.846746] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:09.901 [2024-11-27 21:47:32.846877] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:09.901 [2024-11-27 21:47:32.846912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:09.901 [2024-11-27 21:47:32.846937] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.901 "name": "raid_bdev1", 00:15:09.901 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:09.901 "strip_size_kb": 0, 00:15:09.901 "state": "online", 00:15:09.901 "raid_level": "raid1", 00:15:09.901 "superblock": true, 00:15:09.901 "num_base_bdevs": 2, 00:15:09.901 "num_base_bdevs_discovered": 1, 00:15:09.901 "num_base_bdevs_operational": 1, 00:15:09.901 "base_bdevs_list": [ 00:15:09.901 { 00:15:09.901 "name": null, 00:15:09.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.901 "is_configured": false, 00:15:09.901 "data_offset": 0, 00:15:09.901 "data_size": 7936 00:15:09.901 }, 00:15:09.901 { 00:15:09.901 "name": "BaseBdev2", 00:15:09.901 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:09.901 "is_configured": true, 00:15:09.901 "data_offset": 256, 00:15:09.901 "data_size": 7936 00:15:09.901 } 00:15:09.901 ] 00:15:09.901 }' 
00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.901 21:47:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.471 "name": "raid_bdev1", 00:15:10.471 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:10.471 "strip_size_kb": 0, 00:15:10.471 "state": "online", 00:15:10.471 "raid_level": "raid1", 00:15:10.471 "superblock": true, 00:15:10.471 "num_base_bdevs": 2, 00:15:10.471 "num_base_bdevs_discovered": 1, 00:15:10.471 "num_base_bdevs_operational": 1, 00:15:10.471 "base_bdevs_list": [ 00:15:10.471 { 00:15:10.471 "name": null, 00:15:10.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.471 "is_configured": false, 00:15:10.471 "data_offset": 0, 
00:15:10.471 "data_size": 7936 00:15:10.471 }, 00:15:10.471 { 00:15:10.471 "name": "BaseBdev2", 00:15:10.471 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:10.471 "is_configured": true, 00:15:10.471 "data_offset": 256, 00:15:10.471 "data_size": 7936 00:15:10.471 } 00:15:10.471 ] 00:15:10.471 }' 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:10.471 [2024-11-27 21:47:33.426362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:10.471 [2024-11-27 21:47:33.426414] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.471 [2024-11-27 21:47:33.426433] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:10.471 [2024-11-27 21:47:33.426443] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.471 [2024-11-27 21:47:33.426814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.471 [2024-11-27 21:47:33.426837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:10.471 [2024-11-27 21:47:33.426901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:10.471 [2024-11-27 21:47:33.426918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.471 [2024-11-27 21:47:33.426929] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:10.471 [2024-11-27 21:47:33.426940] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:10.471 BaseBdev1 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.471 21:47:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.412 "name": "raid_bdev1", 00:15:11.412 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:11.412 "strip_size_kb": 0, 00:15:11.412 "state": "online", 00:15:11.412 "raid_level": "raid1", 00:15:11.412 "superblock": true, 00:15:11.412 "num_base_bdevs": 2, 00:15:11.412 "num_base_bdevs_discovered": 1, 00:15:11.412 "num_base_bdevs_operational": 1, 00:15:11.412 "base_bdevs_list": [ 00:15:11.412 { 00:15:11.412 "name": null, 00:15:11.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.412 "is_configured": false, 00:15:11.412 "data_offset": 0, 00:15:11.412 "data_size": 7936 00:15:11.412 }, 00:15:11.412 { 00:15:11.412 "name": "BaseBdev2", 00:15:11.412 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:11.412 "is_configured": true, 00:15:11.412 "data_offset": 256, 00:15:11.412 "data_size": 7936 00:15:11.412 } 00:15:11.412 ] 00:15:11.412 }' 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.412 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.007 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.007 "name": "raid_bdev1", 00:15:12.007 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:12.007 "strip_size_kb": 0, 00:15:12.007 "state": "online", 00:15:12.007 "raid_level": "raid1", 00:15:12.007 "superblock": true, 00:15:12.007 "num_base_bdevs": 2, 00:15:12.007 "num_base_bdevs_discovered": 1, 00:15:12.007 "num_base_bdevs_operational": 1, 00:15:12.007 "base_bdevs_list": [ 00:15:12.007 { 00:15:12.007 "name": null, 00:15:12.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.008 "is_configured": false, 00:15:12.008 "data_offset": 0, 00:15:12.008 "data_size": 7936 00:15:12.008 }, 00:15:12.008 { 00:15:12.008 "name": "BaseBdev2", 00:15:12.008 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:12.008 "is_configured": true, 
00:15:12.008 "data_offset": 256, 00:15:12.008 "data_size": 7936 00:15:12.008 } 00:15:12.008 ] 00:15:12.008 }' 00:15:12.008 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.008 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.008 21:47:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.008 [2024-11-27 21:47:35.039589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:12.008 [2024-11-27 21:47:35.039784] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:12.008 [2024-11-27 21:47:35.039848] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:12.008 request: 00:15:12.008 { 00:15:12.008 "base_bdev": "BaseBdev1", 00:15:12.008 "raid_bdev": "raid_bdev1", 00:15:12.008 "method": "bdev_raid_add_base_bdev", 00:15:12.008 "req_id": 1 00:15:12.008 } 00:15:12.008 Got JSON-RPC error response 00:15:12.008 response: 00:15:12.008 { 00:15:12.008 "code": -22, 00:15:12.008 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:12.008 } 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.008 21:47:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:12.964 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.223 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.223 "name": "raid_bdev1", 00:15:13.223 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:13.223 "strip_size_kb": 0, 00:15:13.223 "state": "online", 00:15:13.223 "raid_level": "raid1", 00:15:13.223 "superblock": true, 00:15:13.223 "num_base_bdevs": 2, 00:15:13.223 "num_base_bdevs_discovered": 1, 00:15:13.223 "num_base_bdevs_operational": 1, 00:15:13.223 "base_bdevs_list": [ 00:15:13.223 { 00:15:13.223 "name": null, 00:15:13.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.223 "is_configured": false, 00:15:13.223 "data_offset": 0, 00:15:13.223 "data_size": 7936 00:15:13.223 }, 00:15:13.223 { 00:15:13.223 "name": "BaseBdev2", 00:15:13.223 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:13.223 "is_configured": true, 00:15:13.223 "data_offset": 256, 00:15:13.223 "data_size": 7936 00:15:13.223 } 00:15:13.223 ] 00:15:13.223 }' 
00:15:13.223 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.223 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.482 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.482 "name": "raid_bdev1", 00:15:13.482 "uuid": "c5af1e7b-4fc8-4b75-8ab8-c4682d307b44", 00:15:13.482 "strip_size_kb": 0, 00:15:13.482 "state": "online", 00:15:13.482 "raid_level": "raid1", 00:15:13.482 "superblock": true, 00:15:13.482 "num_base_bdevs": 2, 00:15:13.482 "num_base_bdevs_discovered": 1, 00:15:13.483 "num_base_bdevs_operational": 1, 00:15:13.483 "base_bdevs_list": [ 00:15:13.483 { 00:15:13.483 "name": null, 00:15:13.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.483 "is_configured": false, 00:15:13.483 "data_offset": 0, 
00:15:13.483 "data_size": 7936 00:15:13.483 }, 00:15:13.483 { 00:15:13.483 "name": "BaseBdev2", 00:15:13.483 "uuid": "47f6d1bd-8e37-563c-90fc-4daacf5b9df5", 00:15:13.483 "is_configured": true, 00:15:13.483 "data_offset": 256, 00:15:13.483 "data_size": 7936 00:15:13.483 } 00:15:13.483 ] 00:15:13.483 }' 00:15:13.483 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 96518 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 96518 ']' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 96518 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96518 00:15:13.742 killing process with pid 96518 00:15:13.742 Received shutdown signal, test time was about 60.000000 seconds 00:15:13.742 00:15:13.742 Latency(us) 00:15:13.742 [2024-11-27T21:47:36.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.742 [2024-11-27T21:47:36.863Z] =================================================================================================================== 00:15:13.742 [2024-11-27T21:47:36.863Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:13.742 21:47:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96518' 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 96518 00:15:13.742 [2024-11-27 21:47:36.703996] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.742 [2024-11-27 21:47:36.704119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.742 [2024-11-27 21:47:36.704179] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.742 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 96518 00:15:13.742 [2024-11-27 21:47:36.704189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:13.742 [2024-11-27 21:47:36.734976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:14.002 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:15:14.002 00:15:14.002 real 0m18.344s 00:15:14.002 user 0m24.311s 00:15:14.002 sys 0m2.653s 00:15:14.002 ************************************ 00:15:14.002 END TEST raid_rebuild_test_sb_4k 00:15:14.002 ************************************ 00:15:14.002 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.002 21:47:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:15:14.002 21:47:36 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:15:14.002 21:47:36 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:15:14.002 21:47:36 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:14.002 21:47:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.002 21:47:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:14.002 ************************************ 00:15:14.002 START TEST raid_state_function_test_sb_md_separate 00:15:14.002 ************************************ 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:14.002 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:14.003 Process raid pid: 97197 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=97197 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 97197' 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 97197 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97197 ']' 00:15:14.003 21:47:37 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.003 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:14.003 [2024-11-27 21:47:37.105875] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:14.003 [2024-11-27 21:47:37.106015] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.262 [2024-11-27 21:47:37.264446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.262 [2024-11-27 21:47:37.290436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.262 [2024-11-27 21:47:37.333338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.262 [2024-11-27 21:47:37.333456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.830 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.830 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:14.830 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:14.830 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.830 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:14.830 [2024-11-27 21:47:37.936453] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.831 [2024-11-27 21:47:37.936586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.831 [2024-11-27 21:47:37.936600] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.831 [2024-11-27 21:47:37.936610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.831 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.090 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.090 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.090 "name": "Existed_Raid", 00:15:15.090 "uuid": "6f073c4b-d3bf-45de-a120-c20eec3161a4", 00:15:15.090 "strip_size_kb": 0, 00:15:15.090 "state": "configuring", 00:15:15.090 "raid_level": "raid1", 00:15:15.090 "superblock": true, 00:15:15.090 "num_base_bdevs": 2, 00:15:15.090 "num_base_bdevs_discovered": 0, 00:15:15.090 "num_base_bdevs_operational": 2, 00:15:15.090 "base_bdevs_list": [ 00:15:15.090 { 00:15:15.090 "name": "BaseBdev1", 00:15:15.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.090 "is_configured": false, 00:15:15.090 "data_offset": 0, 00:15:15.090 "data_size": 0 00:15:15.090 }, 00:15:15.090 { 00:15:15.090 "name": "BaseBdev2", 00:15:15.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.090 "is_configured": false, 00:15:15.090 "data_offset": 0, 00:15:15.090 "data_size": 0 00:15:15.090 } 00:15:15.090 ] 00:15:15.090 }' 00:15:15.090 21:47:37 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.090 21:47:37 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.349 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.349 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.349 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.349 [2024-11-27 21:47:38.367731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.349 [2024-11-27 21:47:38.367829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state configuring 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 [2024-11-27 21:47:38.379714] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:15.350 [2024-11-27 21:47:38.379790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:15.350 [2024-11-27 21:47:38.379823] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.350 [2024-11-27 21:47:38.379856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.350 21:47:38 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 [2024-11-27 21:47:38.401116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.350 BaseBdev1 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 [ 00:15:15.350 { 00:15:15.350 "name": "BaseBdev1", 00:15:15.350 "aliases": [ 00:15:15.350 "a20b9955-db35-4822-9024-c9fb76db24b5" 00:15:15.350 ], 00:15:15.350 "product_name": "Malloc disk", 00:15:15.350 "block_size": 4096, 00:15:15.350 "num_blocks": 8192, 00:15:15.350 "uuid": "a20b9955-db35-4822-9024-c9fb76db24b5", 00:15:15.350 "md_size": 32, 00:15:15.350 "md_interleave": false, 00:15:15.350 "dif_type": 0, 00:15:15.350 "assigned_rate_limits": { 00:15:15.350 "rw_ios_per_sec": 0, 00:15:15.350 "rw_mbytes_per_sec": 0, 00:15:15.350 "r_mbytes_per_sec": 0, 00:15:15.350 "w_mbytes_per_sec": 0 00:15:15.350 }, 00:15:15.350 "claimed": true, 00:15:15.350 "claim_type": "exclusive_write", 00:15:15.350 "zoned": false, 00:15:15.350 "supported_io_types": { 00:15:15.350 "read": true, 00:15:15.350 "write": true, 00:15:15.350 "unmap": true, 00:15:15.350 "flush": true, 00:15:15.350 "reset": true, 00:15:15.350 "nvme_admin": false, 00:15:15.350 "nvme_io": false, 00:15:15.350 "nvme_io_md": false, 00:15:15.350 "write_zeroes": true, 00:15:15.350 "zcopy": true, 00:15:15.350 "get_zone_info": false, 00:15:15.350 "zone_management": false, 00:15:15.350 "zone_append": false, 00:15:15.350 "compare": false, 00:15:15.350 "compare_and_write": false, 00:15:15.350 "abort": true, 00:15:15.350 "seek_hole": false, 00:15:15.350 "seek_data": false, 00:15:15.350 "copy": true, 00:15:15.350 "nvme_iov_md": false 00:15:15.350 }, 00:15:15.350 "memory_domains": [ 00:15:15.350 { 00:15:15.350 "dma_device_id": "system", 00:15:15.350 "dma_device_type": 1 00:15:15.350 }, 
00:15:15.350 { 00:15:15.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.350 "dma_device_type": 2 00:15:15.350 } 00:15:15.350 ], 00:15:15.350 "driver_specific": {} 00:15:15.350 } 00:15:15.350 ] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.350 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.609 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.609 "name": "Existed_Raid", 00:15:15.609 "uuid": "2f031580-57bd-4432-80ff-9f6fae45faea", 00:15:15.609 "strip_size_kb": 0, 00:15:15.609 "state": "configuring", 00:15:15.609 "raid_level": "raid1", 00:15:15.609 "superblock": true, 00:15:15.609 "num_base_bdevs": 2, 00:15:15.609 "num_base_bdevs_discovered": 1, 00:15:15.609 "num_base_bdevs_operational": 2, 00:15:15.609 "base_bdevs_list": [ 00:15:15.609 { 00:15:15.609 "name": "BaseBdev1", 00:15:15.609 "uuid": "a20b9955-db35-4822-9024-c9fb76db24b5", 00:15:15.609 "is_configured": true, 00:15:15.609 "data_offset": 256, 00:15:15.609 "data_size": 7936 00:15:15.609 }, 00:15:15.609 { 00:15:15.609 "name": "BaseBdev2", 00:15:15.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.609 "is_configured": false, 00:15:15.609 "data_offset": 0, 00:15:15.609 "data_size": 0 00:15:15.609 } 00:15:15.609 ] 00:15:15.609 }' 00:15:15.609 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.609 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:15:15.869 [2024-11-27 21:47:38.860418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.869 [2024-11-27 21:47:38.860508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.869 [2024-11-27 21:47:38.872431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.869 [2024-11-27 21:47:38.874250] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.869 [2024-11-27 21:47:38.874317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.869 "name": "Existed_Raid", 00:15:15.869 "uuid": "739cf225-04c7-41bf-a562-febf118bbba5", 00:15:15.869 "strip_size_kb": 0, 00:15:15.869 "state": "configuring", 00:15:15.869 "raid_level": "raid1", 00:15:15.869 "superblock": true, 00:15:15.869 "num_base_bdevs": 2, 00:15:15.869 "num_base_bdevs_discovered": 1, 00:15:15.869 
"num_base_bdevs_operational": 2, 00:15:15.869 "base_bdevs_list": [ 00:15:15.869 { 00:15:15.869 "name": "BaseBdev1", 00:15:15.869 "uuid": "a20b9955-db35-4822-9024-c9fb76db24b5", 00:15:15.869 "is_configured": true, 00:15:15.869 "data_offset": 256, 00:15:15.869 "data_size": 7936 00:15:15.869 }, 00:15:15.869 { 00:15:15.869 "name": "BaseBdev2", 00:15:15.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.869 "is_configured": false, 00:15:15.869 "data_offset": 0, 00:15:15.869 "data_size": 0 00:15:15.869 } 00:15:15.869 ] 00:15:15.869 }' 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.869 21:47:38 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 [2024-11-27 21:47:39.267498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.439 [2024-11-27 21:47:39.267778] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:16.439 [2024-11-27 21:47:39.267856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:16.439 [2024-11-27 21:47:39.268010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:16.439 [2024-11-27 21:47:39.268175] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:16.439 BaseBdev2 00:15:16.439 [2024-11-27 21:47:39.268228] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 
00:15:16.439 [2024-11-27 21:47:39.268359] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 [ 00:15:16.439 { 00:15:16.439 "name": "BaseBdev2", 00:15:16.439 "aliases": [ 00:15:16.439 
"80871e69-ae70-4311-9472-4863bc7d94ef" 00:15:16.439 ], 00:15:16.439 "product_name": "Malloc disk", 00:15:16.439 "block_size": 4096, 00:15:16.439 "num_blocks": 8192, 00:15:16.439 "uuid": "80871e69-ae70-4311-9472-4863bc7d94ef", 00:15:16.439 "md_size": 32, 00:15:16.439 "md_interleave": false, 00:15:16.439 "dif_type": 0, 00:15:16.439 "assigned_rate_limits": { 00:15:16.439 "rw_ios_per_sec": 0, 00:15:16.439 "rw_mbytes_per_sec": 0, 00:15:16.439 "r_mbytes_per_sec": 0, 00:15:16.439 "w_mbytes_per_sec": 0 00:15:16.439 }, 00:15:16.439 "claimed": true, 00:15:16.439 "claim_type": "exclusive_write", 00:15:16.439 "zoned": false, 00:15:16.439 "supported_io_types": { 00:15:16.439 "read": true, 00:15:16.439 "write": true, 00:15:16.439 "unmap": true, 00:15:16.439 "flush": true, 00:15:16.439 "reset": true, 00:15:16.439 "nvme_admin": false, 00:15:16.439 "nvme_io": false, 00:15:16.439 "nvme_io_md": false, 00:15:16.439 "write_zeroes": true, 00:15:16.439 "zcopy": true, 00:15:16.439 "get_zone_info": false, 00:15:16.439 "zone_management": false, 00:15:16.439 "zone_append": false, 00:15:16.439 "compare": false, 00:15:16.439 "compare_and_write": false, 00:15:16.439 "abort": true, 00:15:16.439 "seek_hole": false, 00:15:16.439 "seek_data": false, 00:15:16.439 "copy": true, 00:15:16.439 "nvme_iov_md": false 00:15:16.439 }, 00:15:16.439 "memory_domains": [ 00:15:16.439 { 00:15:16.439 "dma_device_id": "system", 00:15:16.439 "dma_device_type": 1 00:15:16.439 }, 00:15:16.439 { 00:15:16.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.439 "dma_device_type": 2 00:15:16.439 } 00:15:16.439 ], 00:15:16.439 "driver_specific": {} 00:15:16.439 } 00:15:16.439 ] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.439 21:47:39 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.439 "name": "Existed_Raid", 00:15:16.439 "uuid": "739cf225-04c7-41bf-a562-febf118bbba5", 00:15:16.439 "strip_size_kb": 0, 00:15:16.439 "state": "online", 00:15:16.439 "raid_level": "raid1", 00:15:16.439 "superblock": true, 00:15:16.439 "num_base_bdevs": 2, 00:15:16.439 "num_base_bdevs_discovered": 2, 00:15:16.439 "num_base_bdevs_operational": 2, 00:15:16.439 "base_bdevs_list": [ 00:15:16.439 { 00:15:16.439 "name": "BaseBdev1", 00:15:16.439 "uuid": "a20b9955-db35-4822-9024-c9fb76db24b5", 00:15:16.439 "is_configured": true, 00:15:16.439 "data_offset": 256, 00:15:16.439 "data_size": 7936 00:15:16.439 }, 00:15:16.439 { 00:15:16.439 "name": "BaseBdev2", 00:15:16.439 "uuid": "80871e69-ae70-4311-9472-4863bc7d94ef", 00:15:16.439 "is_configured": true, 00:15:16.439 "data_offset": 256, 00:15:16.439 "data_size": 7936 00:15:16.439 } 00:15:16.439 ] 00:15:16.439 }' 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.439 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.699 21:47:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.699 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.699 [2024-11-27 21:47:39.810938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.959 "name": "Existed_Raid", 00:15:16.959 "aliases": [ 00:15:16.959 "739cf225-04c7-41bf-a562-febf118bbba5" 00:15:16.959 ], 00:15:16.959 "product_name": "Raid Volume", 00:15:16.959 "block_size": 4096, 00:15:16.959 "num_blocks": 7936, 00:15:16.959 "uuid": "739cf225-04c7-41bf-a562-febf118bbba5", 00:15:16.959 "md_size": 32, 00:15:16.959 "md_interleave": false, 00:15:16.959 "dif_type": 0, 00:15:16.959 "assigned_rate_limits": { 00:15:16.959 "rw_ios_per_sec": 0, 00:15:16.959 "rw_mbytes_per_sec": 0, 00:15:16.959 "r_mbytes_per_sec": 0, 00:15:16.959 "w_mbytes_per_sec": 0 00:15:16.959 }, 00:15:16.959 "claimed": false, 00:15:16.959 "zoned": false, 00:15:16.959 "supported_io_types": { 00:15:16.959 "read": true, 00:15:16.959 "write": true, 00:15:16.959 "unmap": false, 00:15:16.959 "flush": false, 00:15:16.959 "reset": true, 00:15:16.959 "nvme_admin": false, 00:15:16.959 "nvme_io": false, 00:15:16.959 "nvme_io_md": false, 00:15:16.959 "write_zeroes": true, 00:15:16.959 "zcopy": false, 00:15:16.959 "get_zone_info": 
false, 00:15:16.959 "zone_management": false, 00:15:16.959 "zone_append": false, 00:15:16.959 "compare": false, 00:15:16.959 "compare_and_write": false, 00:15:16.959 "abort": false, 00:15:16.959 "seek_hole": false, 00:15:16.959 "seek_data": false, 00:15:16.959 "copy": false, 00:15:16.959 "nvme_iov_md": false 00:15:16.959 }, 00:15:16.959 "memory_domains": [ 00:15:16.959 { 00:15:16.959 "dma_device_id": "system", 00:15:16.959 "dma_device_type": 1 00:15:16.959 }, 00:15:16.959 { 00:15:16.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.959 "dma_device_type": 2 00:15:16.959 }, 00:15:16.959 { 00:15:16.959 "dma_device_id": "system", 00:15:16.959 "dma_device_type": 1 00:15:16.959 }, 00:15:16.959 { 00:15:16.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.959 "dma_device_type": 2 00:15:16.959 } 00:15:16.959 ], 00:15:16.959 "driver_specific": { 00:15:16.959 "raid": { 00:15:16.959 "uuid": "739cf225-04c7-41bf-a562-febf118bbba5", 00:15:16.959 "strip_size_kb": 0, 00:15:16.959 "state": "online", 00:15:16.959 "raid_level": "raid1", 00:15:16.959 "superblock": true, 00:15:16.959 "num_base_bdevs": 2, 00:15:16.959 "num_base_bdevs_discovered": 2, 00:15:16.959 "num_base_bdevs_operational": 2, 00:15:16.959 "base_bdevs_list": [ 00:15:16.959 { 00:15:16.959 "name": "BaseBdev1", 00:15:16.959 "uuid": "a20b9955-db35-4822-9024-c9fb76db24b5", 00:15:16.959 "is_configured": true, 00:15:16.959 "data_offset": 256, 00:15:16.959 "data_size": 7936 00:15:16.959 }, 00:15:16.959 { 00:15:16.959 "name": "BaseBdev2", 00:15:16.959 "uuid": "80871e69-ae70-4311-9472-4863bc7d94ef", 00:15:16.959 "is_configured": true, 00:15:16.959 "data_offset": 256, 00:15:16.959 "data_size": 7936 00:15:16.959 } 00:15:16.959 ] 00:15:16.959 } 00:15:16.959 } 00:15:16.959 }' 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.959 21:47:39 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:16.959 BaseBdev2' 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.959 21:47:39 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.959 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:16.960 [2024-11-27 21:47:40.062299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid 
online raid1 0 1 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.960 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.219 "name": "Existed_Raid", 00:15:17.219 "uuid": 
"739cf225-04c7-41bf-a562-febf118bbba5", 00:15:17.219 "strip_size_kb": 0, 00:15:17.219 "state": "online", 00:15:17.219 "raid_level": "raid1", 00:15:17.219 "superblock": true, 00:15:17.219 "num_base_bdevs": 2, 00:15:17.219 "num_base_bdevs_discovered": 1, 00:15:17.219 "num_base_bdevs_operational": 1, 00:15:17.219 "base_bdevs_list": [ 00:15:17.219 { 00:15:17.219 "name": null, 00:15:17.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.219 "is_configured": false, 00:15:17.219 "data_offset": 0, 00:15:17.219 "data_size": 7936 00:15:17.219 }, 00:15:17.219 { 00:15:17.219 "name": "BaseBdev2", 00:15:17.219 "uuid": "80871e69-ae70-4311-9472-4863bc7d94ef", 00:15:17.219 "is_configured": true, 00:15:17.219 "data_offset": 256, 00:15:17.219 "data_size": 7936 00:15:17.219 } 00:15:17.219 ] 00:15:17.219 }' 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.219 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 
-- # raid_bdev=Existed_Raid 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.479 [2024-11-27 21:47:40.549669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:17.479 [2024-11-27 21:47:40.549763] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.479 [2024-11-27 21:47:40.562185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.479 [2024-11-27 21:47:40.562296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.479 [2024-11-27 21:47:40.562343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.479 21:47:40 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:17.479 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 97197 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97197 ']' 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97197 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97197 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:17.740 killing process with pid 97197 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97197' 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97197 00:15:17.740 [2024-11-27 21:47:40.662360] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:15:17.740 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97197 00:15:17.740 [2024-11-27 21:47:40.663310] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:18.001 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:15:18.001 00:15:18.001 real 0m3.870s 00:15:18.001 user 0m6.095s 00:15:18.001 sys 0m0.834s 00:15:18.001 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:18.001 ************************************ 00:15:18.001 END TEST raid_state_function_test_sb_md_separate 00:15:18.001 ************************************ 00:15:18.001 21:47:40 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.001 21:47:40 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:15:18.001 21:47:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:18.001 21:47:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:18.001 21:47:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:18.001 ************************************ 00:15:18.001 START TEST raid_superblock_test_md_separate 00:15:18.001 ************************************ 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 
00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=97433 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 97433 00:15:18.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97433 ']' 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:18.001 21:47:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.001 [2024-11-27 21:47:41.049705] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:18.001 [2024-11-27 21:47:41.049837] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97433 ] 00:15:18.260 [2024-11-27 21:47:41.200159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.260 [2024-11-27 21:47:41.225692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.260 [2024-11-27 21:47:41.268399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.260 [2024-11-27 21:47:41.268438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # 
(( i = 1 )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 malloc1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 [2024-11-27 21:47:41.880640] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:18.827 [2024-11-27 21:47:41.880790] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.827 [2024-11-27 21:47:41.880841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:18.827 [2024-11-27 21:47:41.880871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.827 [2024-11-27 21:47:41.882658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.827 [2024-11-27 21:47:41.882729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:18.827 pt1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 21:47:41 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 malloc2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 [2024-11-27 21:47:41.909690] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.827 [2024-11-27 21:47:41.909791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.827 [2024-11-27 21:47:41.909831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:18.827 [2024-11-27 21:47:41.909859] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.827 [2024-11-27 21:47:41.911615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.827 [2024-11-27 21:47:41.911686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.827 pt2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:18.827 [2024-11-27 21:47:41.921704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:18.827 [2024-11-27 21:47:41.923414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.827 [2024-11-27 21:47:41.923558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:18.827 [2024-11-27 21:47:41.923573] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:18.827 [2024-11-27 21:47:41.923648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:18.827 [2024-11-27 21:47:41.923751] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:18.827 [2024-11-27 21:47:41.923760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:18.827 [2024-11-27 21:47:41.923843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.827 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.084 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.084 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.084 "name": "raid_bdev1", 00:15:19.084 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:19.084 "strip_size_kb": 0, 00:15:19.084 "state": "online", 00:15:19.084 "raid_level": "raid1", 00:15:19.084 "superblock": true, 00:15:19.084 "num_base_bdevs": 2, 00:15:19.084 "num_base_bdevs_discovered": 2, 00:15:19.084 "num_base_bdevs_operational": 2, 00:15:19.084 "base_bdevs_list": [ 00:15:19.084 { 00:15:19.084 "name": "pt1", 00:15:19.084 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.084 "is_configured": true, 00:15:19.084 "data_offset": 256, 00:15:19.084 "data_size": 7936 00:15:19.084 }, 00:15:19.084 { 00:15:19.084 "name": "pt2", 00:15:19.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.084 "is_configured": true, 00:15:19.084 "data_offset": 256, 
00:15:19.084 "data_size": 7936 00:15:19.084 } 00:15:19.084 ] 00:15:19.084 }' 00:15:19.084 21:47:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.084 21:47:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.342 [2024-11-27 21:47:42.421131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.342 "name": "raid_bdev1", 00:15:19.342 "aliases": [ 00:15:19.342 "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd" 00:15:19.342 ], 00:15:19.342 "product_name": 
"Raid Volume", 00:15:19.342 "block_size": 4096, 00:15:19.342 "num_blocks": 7936, 00:15:19.342 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:19.342 "md_size": 32, 00:15:19.342 "md_interleave": false, 00:15:19.342 "dif_type": 0, 00:15:19.342 "assigned_rate_limits": { 00:15:19.342 "rw_ios_per_sec": 0, 00:15:19.342 "rw_mbytes_per_sec": 0, 00:15:19.342 "r_mbytes_per_sec": 0, 00:15:19.342 "w_mbytes_per_sec": 0 00:15:19.342 }, 00:15:19.342 "claimed": false, 00:15:19.342 "zoned": false, 00:15:19.342 "supported_io_types": { 00:15:19.342 "read": true, 00:15:19.342 "write": true, 00:15:19.342 "unmap": false, 00:15:19.342 "flush": false, 00:15:19.342 "reset": true, 00:15:19.342 "nvme_admin": false, 00:15:19.342 "nvme_io": false, 00:15:19.342 "nvme_io_md": false, 00:15:19.342 "write_zeroes": true, 00:15:19.342 "zcopy": false, 00:15:19.342 "get_zone_info": false, 00:15:19.342 "zone_management": false, 00:15:19.342 "zone_append": false, 00:15:19.342 "compare": false, 00:15:19.342 "compare_and_write": false, 00:15:19.342 "abort": false, 00:15:19.342 "seek_hole": false, 00:15:19.342 "seek_data": false, 00:15:19.342 "copy": false, 00:15:19.342 "nvme_iov_md": false 00:15:19.342 }, 00:15:19.342 "memory_domains": [ 00:15:19.342 { 00:15:19.342 "dma_device_id": "system", 00:15:19.342 "dma_device_type": 1 00:15:19.342 }, 00:15:19.342 { 00:15:19.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.342 "dma_device_type": 2 00:15:19.342 }, 00:15:19.342 { 00:15:19.342 "dma_device_id": "system", 00:15:19.342 "dma_device_type": 1 00:15:19.342 }, 00:15:19.342 { 00:15:19.342 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.342 "dma_device_type": 2 00:15:19.342 } 00:15:19.342 ], 00:15:19.342 "driver_specific": { 00:15:19.342 "raid": { 00:15:19.342 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:19.342 "strip_size_kb": 0, 00:15:19.342 "state": "online", 00:15:19.342 "raid_level": "raid1", 00:15:19.342 "superblock": true, 00:15:19.342 "num_base_bdevs": 2, 00:15:19.342 
"num_base_bdevs_discovered": 2, 00:15:19.342 "num_base_bdevs_operational": 2, 00:15:19.342 "base_bdevs_list": [ 00:15:19.342 { 00:15:19.342 "name": "pt1", 00:15:19.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.342 "is_configured": true, 00:15:19.342 "data_offset": 256, 00:15:19.342 "data_size": 7936 00:15:19.342 }, 00:15:19.342 { 00:15:19.342 "name": "pt2", 00:15:19.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.342 "is_configured": true, 00:15:19.342 "data_offset": 256, 00:15:19.342 "data_size": 7936 00:15:19.342 } 00:15:19.342 ] 00:15:19.342 } 00:15:19.342 } 00:15:19.342 }' 00:15:19.342 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:19.600 pt2' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.600 
21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:19.600 [2024-11-27 21:47:42.656643] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd ']' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.600 [2024-11-27 21:47:42.700348] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.600 [2024-11-27 21:47:42.700420] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.600 [2024-11-27 21:47:42.700498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.600 [2024-11-27 21:47:42.700551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.600 [2024-11-27 21:47:42.700560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.600 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:19.859 21:47:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.859 [2024-11-27 21:47:42.840200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:19.859 [2024-11-27 21:47:42.842015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:19.859 [2024-11-27 21:47:42.842125] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev 
found on bdev malloc1 00:15:19.859 [2024-11-27 21:47:42.842174] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:19.859 [2024-11-27 21:47:42.842191] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:19.859 [2024-11-27 21:47:42.842199] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:19.859 request: 00:15:19.859 { 00:15:19.859 "name": "raid_bdev1", 00:15:19.859 "raid_level": "raid1", 00:15:19.859 "base_bdevs": [ 00:15:19.859 "malloc1", 00:15:19.859 "malloc2" 00:15:19.859 ], 00:15:19.859 "superblock": false, 00:15:19.859 "method": "bdev_raid_create", 00:15:19.859 "req_id": 1 00:15:19.859 } 00:15:19.859 Got JSON-RPC error response 00:15:19.859 response: 00:15:19.859 { 00:15:19.859 "code": -17, 00:15:19.859 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:19.859 } 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 
-- # set +x 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.859 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.860 [2024-11-27 21:47:42.904089] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:19.860 [2024-11-27 21:47:42.904178] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.860 [2024-11-27 21:47:42.904209] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:19.860 [2024-11-27 21:47:42.904240] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.860 [2024-11-27 21:47:42.906090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.860 [2024-11-27 21:47:42.906153] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:19.860 [2024-11-27 21:47:42.906213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:19.860 [2024-11-27 21:47:42.906267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:19.860 pt1 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid1 0 2 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.860 "name": "raid_bdev1", 00:15:19.860 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:19.860 "strip_size_kb": 0, 00:15:19.860 "state": "configuring", 00:15:19.860 
"raid_level": "raid1", 00:15:19.860 "superblock": true, 00:15:19.860 "num_base_bdevs": 2, 00:15:19.860 "num_base_bdevs_discovered": 1, 00:15:19.860 "num_base_bdevs_operational": 2, 00:15:19.860 "base_bdevs_list": [ 00:15:19.860 { 00:15:19.860 "name": "pt1", 00:15:19.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.860 "is_configured": true, 00:15:19.860 "data_offset": 256, 00:15:19.860 "data_size": 7936 00:15:19.860 }, 00:15:19.860 { 00:15:19.860 "name": null, 00:15:19.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.860 "is_configured": false, 00:15:19.860 "data_offset": 256, 00:15:19.860 "data_size": 7936 00:15:19.860 } 00:15:19.860 ] 00:15:19.860 }' 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.860 21:47:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.426 [2024-11-27 21:47:43.359294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:20.426 [2024-11-27 21:47:43.359386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.426 [2024-11-27 21:47:43.359408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000008480 00:15:20.426 [2024-11-27 21:47:43.359416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.426 [2024-11-27 21:47:43.359570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.426 [2024-11-27 21:47:43.359585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:20.426 [2024-11-27 21:47:43.359627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:20.426 [2024-11-27 21:47:43.359652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.426 [2024-11-27 21:47:43.359736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:20.426 [2024-11-27 21:47:43.359743] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:20.426 [2024-11-27 21:47:43.359821] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:20.426 [2024-11-27 21:47:43.359900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:20.426 [2024-11-27 21:47:43.359912] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:20.426 [2024-11-27 21:47:43.359971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.426 pt2 00:15:20.426 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.427 "name": "raid_bdev1", 00:15:20.427 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:20.427 "strip_size_kb": 0, 00:15:20.427 "state": "online", 00:15:20.427 "raid_level": "raid1", 00:15:20.427 "superblock": true, 00:15:20.427 "num_base_bdevs": 2, 00:15:20.427 
"num_base_bdevs_discovered": 2, 00:15:20.427 "num_base_bdevs_operational": 2, 00:15:20.427 "base_bdevs_list": [ 00:15:20.427 { 00:15:20.427 "name": "pt1", 00:15:20.427 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.427 "is_configured": true, 00:15:20.427 "data_offset": 256, 00:15:20.427 "data_size": 7936 00:15:20.427 }, 00:15:20.427 { 00:15:20.427 "name": "pt2", 00:15:20.427 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.427 "is_configured": true, 00:15:20.427 "data_offset": 256, 00:15:20.427 "data_size": 7936 00:15:20.427 } 00:15:20.427 ] 00:15:20.427 }' 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.427 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:15:20.685 [2024-11-27 21:47:43.782919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.685 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.944 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:20.944 "name": "raid_bdev1", 00:15:20.944 "aliases": [ 00:15:20.944 "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd" 00:15:20.944 ], 00:15:20.944 "product_name": "Raid Volume", 00:15:20.944 "block_size": 4096, 00:15:20.944 "num_blocks": 7936, 00:15:20.944 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:20.944 "md_size": 32, 00:15:20.944 "md_interleave": false, 00:15:20.944 "dif_type": 0, 00:15:20.944 "assigned_rate_limits": { 00:15:20.944 "rw_ios_per_sec": 0, 00:15:20.944 "rw_mbytes_per_sec": 0, 00:15:20.944 "r_mbytes_per_sec": 0, 00:15:20.944 "w_mbytes_per_sec": 0 00:15:20.944 }, 00:15:20.944 "claimed": false, 00:15:20.944 "zoned": false, 00:15:20.944 "supported_io_types": { 00:15:20.944 "read": true, 00:15:20.944 "write": true, 00:15:20.944 "unmap": false, 00:15:20.944 "flush": false, 00:15:20.944 "reset": true, 00:15:20.944 "nvme_admin": false, 00:15:20.944 "nvme_io": false, 00:15:20.944 "nvme_io_md": false, 00:15:20.944 "write_zeroes": true, 00:15:20.944 "zcopy": false, 00:15:20.944 "get_zone_info": false, 00:15:20.944 "zone_management": false, 00:15:20.944 "zone_append": false, 00:15:20.944 "compare": false, 00:15:20.944 "compare_and_write": false, 00:15:20.944 "abort": false, 00:15:20.944 "seek_hole": false, 00:15:20.944 "seek_data": false, 00:15:20.944 "copy": false, 00:15:20.944 "nvme_iov_md": false 00:15:20.945 }, 00:15:20.945 "memory_domains": [ 00:15:20.945 { 00:15:20.945 "dma_device_id": "system", 00:15:20.945 "dma_device_type": 1 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.945 "dma_device_type": 2 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "dma_device_id": 
"system", 00:15:20.945 "dma_device_type": 1 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:20.945 "dma_device_type": 2 00:15:20.945 } 00:15:20.945 ], 00:15:20.945 "driver_specific": { 00:15:20.945 "raid": { 00:15:20.945 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:20.945 "strip_size_kb": 0, 00:15:20.945 "state": "online", 00:15:20.945 "raid_level": "raid1", 00:15:20.945 "superblock": true, 00:15:20.945 "num_base_bdevs": 2, 00:15:20.945 "num_base_bdevs_discovered": 2, 00:15:20.945 "num_base_bdevs_operational": 2, 00:15:20.945 "base_bdevs_list": [ 00:15:20.945 { 00:15:20.945 "name": "pt1", 00:15:20.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 256, 00:15:20.945 "data_size": 7936 00:15:20.945 }, 00:15:20.945 { 00:15:20.945 "name": "pt2", 00:15:20.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.945 "is_configured": true, 00:15:20.945 "data_offset": 256, 00:15:20.945 "data_size": 7936 00:15:20.945 } 00:15:20.945 ] 00:15:20.945 } 00:15:20.945 } 00:15:20.945 }' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:20.945 pt2' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:20.945 21:47:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:20.945 21:47:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:20.945 21:47:44 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:20.945 [2024-11-27 21:47:44.030452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.945 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd '!=' 5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd ']' 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.204 [2024-11-27 21:47:44.074180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.204 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.204 "name": "raid_bdev1", 00:15:21.204 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:21.204 "strip_size_kb": 0, 00:15:21.204 "state": "online", 00:15:21.204 "raid_level": "raid1", 00:15:21.205 "superblock": true, 00:15:21.205 "num_base_bdevs": 2, 00:15:21.205 "num_base_bdevs_discovered": 1, 00:15:21.205 "num_base_bdevs_operational": 1, 00:15:21.205 "base_bdevs_list": [ 00:15:21.205 { 00:15:21.205 
"name": null, 00:15:21.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.205 "is_configured": false, 00:15:21.205 "data_offset": 0, 00:15:21.205 "data_size": 7936 00:15:21.205 }, 00:15:21.205 { 00:15:21.205 "name": "pt2", 00:15:21.205 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.205 "is_configured": true, 00:15:21.205 "data_offset": 256, 00:15:21.205 "data_size": 7936 00:15:21.205 } 00:15:21.205 ] 00:15:21.205 }' 00:15:21.205 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.205 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.464 [2024-11-27 21:47:44.521386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.464 [2024-11-27 21:47:44.521471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.464 [2024-11-27 21:47:44.521549] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.464 [2024-11-27 21:47:44.521606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.464 [2024-11-27 21:47:44.521684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.464 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.724 [2024-11-27 21:47:44.593255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:21.724 [2024-11-27 21:47:44.593315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.724 [2024-11-27 21:47:44.593332] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:21.724 [2024-11-27 21:47:44.593341] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.724 [2024-11-27 21:47:44.595211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.724 [2024-11-27 21:47:44.595245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:21.724 [2024-11-27 21:47:44.595294] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:21.724 [2024-11-27 21:47:44.595324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.724 [2024-11-27 21:47:44.595390] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:21.724 [2024-11-27 21:47:44.595397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:21.724 [2024-11-27 21:47:44.595457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:21.724 [2024-11-27 21:47:44.595526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:21.724 [2024-11-27 21:47:44.595535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:21.724 [2024-11-27 21:47:44.595592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:21.724 pt2 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.724 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.724 
21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.724 "name": "raid_bdev1", 00:15:21.724 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:21.724 "strip_size_kb": 0, 00:15:21.724 "state": "online", 00:15:21.724 "raid_level": "raid1", 00:15:21.724 "superblock": true, 00:15:21.724 "num_base_bdevs": 2, 00:15:21.724 "num_base_bdevs_discovered": 1, 00:15:21.724 "num_base_bdevs_operational": 1, 00:15:21.724 "base_bdevs_list": [ 00:15:21.724 { 00:15:21.724 "name": null, 00:15:21.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.724 "is_configured": false, 00:15:21.724 "data_offset": 256, 00:15:21.724 "data_size": 7936 00:15:21.724 }, 00:15:21.724 { 00:15:21.724 "name": "pt2", 00:15:21.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.725 "is_configured": true, 00:15:21.725 "data_offset": 256, 00:15:21.725 "data_size": 7936 00:15:21.725 } 00:15:21.725 ] 00:15:21.725 }' 00:15:21.725 21:47:44 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.725 21:47:44 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.984 [2024-11-27 21:47:45.024558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.984 [2024-11-27 21:47:45.024631] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:21.984 [2024-11-27 21:47:45.024713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:21.984 [2024-11-27 21:47:45.024772] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.984 [2024-11-27 21:47:45.024858] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.984 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:21.984 [2024-11-27 21:47:45.080477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:21.984 [2024-11-27 21:47:45.080587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.984 [2024-11-27 21:47:45.080622] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000008d80 00:15:21.984 [2024-11-27 21:47:45.080667] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.984 [2024-11-27 21:47:45.082561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.984 [2024-11-27 21:47:45.082637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:21.985 [2024-11-27 21:47:45.082706] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:21.985 [2024-11-27 21:47:45.082776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:21.985 [2024-11-27 21:47:45.082930] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:21.985 [2024-11-27 21:47:45.082987] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:21.985 [2024-11-27 21:47:45.083016] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002000 name raid_bdev1, state configuring 00:15:21.985 [2024-11-27 21:47:45.083100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:21.985 [2024-11-27 21:47:45.083204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:21.985 [2024-11-27 21:47:45.083241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:21.985 [2024-11-27 21:47:45.083350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:21.985 [2024-11-27 21:47:45.083469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:21.985 [2024-11-27 21:47:45.083508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:21.985 [2024-11-27 21:47:45.083637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.985 pt1 00:15:21.985 
21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.985 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.244 21:47:45 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.244 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.244 "name": "raid_bdev1", 00:15:22.244 "uuid": "5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd", 00:15:22.244 "strip_size_kb": 0, 00:15:22.244 "state": "online", 00:15:22.244 "raid_level": "raid1", 00:15:22.244 "superblock": true, 00:15:22.244 "num_base_bdevs": 2, 00:15:22.244 "num_base_bdevs_discovered": 1, 00:15:22.244 "num_base_bdevs_operational": 1, 00:15:22.244 "base_bdevs_list": [ 00:15:22.244 { 00:15:22.244 "name": null, 00:15:22.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.244 "is_configured": false, 00:15:22.244 "data_offset": 256, 00:15:22.244 "data_size": 7936 00:15:22.244 }, 00:15:22.244 { 00:15:22.244 "name": "pt2", 00:15:22.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:22.244 "is_configured": true, 00:15:22.244 "data_offset": 256, 00:15:22.244 "data_size": 7936 00:15:22.244 } 00:15:22.244 ] 00:15:22.244 }' 00:15:22.244 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.244 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:22.504 21:47:45 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.504 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:22.504 [2024-11-27 21:47:45.615875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd '!=' 5f824ba4-cea4-4a20-81b1-4bf5b3c9edbd ']' 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 97433 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97433 ']' 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 97433 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97433 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97433' 00:15:22.763 
killing process with pid 97433 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 97433 00:15:22.763 [2024-11-27 21:47:45.692838] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.763 [2024-11-27 21:47:45.692903] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.763 [2024-11-27 21:47:45.692943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.763 [2024-11-27 21:47:45.692952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:22.763 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 97433 00:15:22.763 [2024-11-27 21:47:45.716752] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.023 21:47:45 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:15:23.023 00:15:23.023 real 0m4.972s 00:15:23.023 user 0m8.141s 00:15:23.023 sys 0m1.100s 00:15:23.023 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:23.023 ************************************ 00:15:23.023 END TEST raid_superblock_test_md_separate 00:15:23.023 ************************************ 00:15:23.023 21:47:45 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.023 21:47:45 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:15:23.023 21:47:45 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:15:23.023 21:47:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:23.023 21:47:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:23.023 21:47:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.023 ************************************ 
00:15:23.023 START TEST raid_rebuild_test_sb_md_separate 00:15:23.023 ************************************ 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:23.023 
21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=97750 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 97750 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 97750 ']' 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.023 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.023 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.024 Zero copy mechanism will not be used. 00:15:23.024 [2024-11-27 21:47:46.112135] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:23.024 [2024-11-27 21:47:46.112255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97750 ] 00:15:23.283 [2024-11-27 21:47:46.243242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.283 [2024-11-27 21:47:46.267400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.283 [2024-11-27 21:47:46.309927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.283 [2024-11-27 21:47:46.309967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.852 BaseBdev1_malloc 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.852 [2024-11-27 21:47:46.933952] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:23.852 [2024-11-27 21:47:46.934021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.852 [2024-11-27 21:47:46.934047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:23.852 [2024-11-27 21:47:46.934064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.852 [2024-11-27 21:47:46.935907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.852 [2024-11-27 21:47:46.936024] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.852 BaseBdev1 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.852 21:47:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.852 BaseBdev2_malloc 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:23.852 [2024-11-27 21:47:46.959070] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:23.852 [2024-11-27 21:47:46.959176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.852 [2024-11-27 21:47:46.959202] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:23.852 [2024-11-27 21:47:46.959211] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.852 [2024-11-27 21:47:46.961109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.852 [2024-11-27 21:47:46.961143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.852 BaseBdev2 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.852 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 spare_malloc 00:15:24.112 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:24.112 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:24.112 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 21:47:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 spare_delay 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 [2024-11-27 21:47:47.011499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.112 [2024-11-27 21:47:47.011552] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.112 [2024-11-27 21:47:47.011572] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:24.112 [2024-11-27 21:47:47.011581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.112 [2024-11-27 21:47:47.013534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.112 [2024-11-27 21:47:47.013568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.112 spare 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:24.112 
21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 [2024-11-27 21:47:47.023518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.112 [2024-11-27 21:47:47.025371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.112 [2024-11-27 21:47:47.025521] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:24.112 [2024-11-27 21:47:47.025543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:24.112 [2024-11-27 21:47:47.025619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:24.112 [2024-11-27 21:47:47.025715] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:24.112 [2024-11-27 21:47:47.025726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:24.112 [2024-11-27 21:47:47.025801] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:24.112 
21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.112 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.112 "name": "raid_bdev1", 00:15:24.112 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:24.112 "strip_size_kb": 0, 00:15:24.112 "state": "online", 00:15:24.112 "raid_level": "raid1", 00:15:24.112 "superblock": true, 00:15:24.112 "num_base_bdevs": 2, 00:15:24.112 "num_base_bdevs_discovered": 2, 00:15:24.112 "num_base_bdevs_operational": 2, 00:15:24.113 "base_bdevs_list": [ 00:15:24.113 { 00:15:24.113 "name": "BaseBdev1", 00:15:24.113 "uuid": "6964ec01-74e3-5b2d-98a2-675578eeb4b1", 00:15:24.113 "is_configured": true, 00:15:24.113 "data_offset": 256, 00:15:24.113 "data_size": 7936 00:15:24.113 }, 00:15:24.113 { 00:15:24.113 "name": "BaseBdev2", 00:15:24.113 "uuid": 
"002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:24.113 "is_configured": true, 00:15:24.113 "data_offset": 256, 00:15:24.113 "data_size": 7936 00:15:24.113 } 00:15:24.113 ] 00:15:24.113 }' 00:15:24.113 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.113 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.372 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.632 [2024-11-27 21:47:47.498951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.632 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:24.633 21:47:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.633 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:24.892 [2024-11-27 21:47:47.770289] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:24.893 /dev/nbd0 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:24.893 21:47:47 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:24.893 1+0 records in 00:15:24.893 1+0 records out 00:15:24.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041548 s, 9.9 MB/s 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:24.893 21:47:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:15:25.462 7936+0 records in 00:15:25.462 7936+0 records out 00:15:25.462 32505856 bytes (33 MB, 31 MiB) copied, 0.612806 s, 53.0 MB/s 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:25.462 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.721 [2024-11-27 21:47:48.674758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.721 [2024-11-27 21:47:48.690836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:25.721 21:47:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.721 "name": "raid_bdev1", 00:15:25.721 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:25.721 "strip_size_kb": 0, 00:15:25.721 "state": "online", 00:15:25.721 "raid_level": "raid1", 00:15:25.721 "superblock": true, 00:15:25.721 "num_base_bdevs": 2, 00:15:25.721 "num_base_bdevs_discovered": 1, 00:15:25.721 "num_base_bdevs_operational": 1, 00:15:25.721 "base_bdevs_list": [ 00:15:25.721 { 00:15:25.721 "name": null, 00:15:25.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.721 "is_configured": false, 00:15:25.721 "data_offset": 0, 00:15:25.721 "data_size": 7936 00:15:25.721 }, 00:15:25.721 { 00:15:25.721 "name": "BaseBdev2", 00:15:25.721 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:25.721 "is_configured": true, 00:15:25.721 "data_offset": 256, 00:15:25.721 "data_size": 7936 00:15:25.721 } 
00:15:25.721 ] 00:15:25.721 }' 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.721 21:47:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.300 21:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.300 21:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.300 21:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:26.300 [2024-11-27 21:47:49.134072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.300 [2024-11-27 21:47:49.136712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019c960 00:15:26.300 [2024-11-27 21:47:49.138551] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.300 21:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.300 21:47:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.240 21:47:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.240 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.240 "name": "raid_bdev1", 00:15:27.240 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:27.240 "strip_size_kb": 0, 00:15:27.240 "state": "online", 00:15:27.240 "raid_level": "raid1", 00:15:27.240 "superblock": true, 00:15:27.240 "num_base_bdevs": 2, 00:15:27.240 "num_base_bdevs_discovered": 2, 00:15:27.240 "num_base_bdevs_operational": 2, 00:15:27.240 "process": { 00:15:27.240 "type": "rebuild", 00:15:27.240 "target": "spare", 00:15:27.240 "progress": { 00:15:27.240 "blocks": 2560, 00:15:27.240 "percent": 32 00:15:27.240 } 00:15:27.240 }, 00:15:27.240 "base_bdevs_list": [ 00:15:27.240 { 00:15:27.240 "name": "spare", 00:15:27.241 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:27.241 "is_configured": true, 00:15:27.241 "data_offset": 256, 00:15:27.241 "data_size": 7936 00:15:27.241 }, 00:15:27.241 { 00:15:27.241 "name": "BaseBdev2", 00:15:27.241 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:27.241 "is_configured": true, 00:15:27.241 "data_offset": 256, 00:15:27.241 "data_size": 7936 00:15:27.241 } 00:15:27.241 ] 00:15:27.241 }' 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.241 [2024-11-27 21:47:50.277831] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.241 [2024-11-27 21:47:50.343240] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.241 [2024-11-27 21:47:50.343342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.241 [2024-11-27 21:47:50.343362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.241 [2024-11-27 21:47:50.343369] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.241 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.500 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.500 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.500 "name": "raid_bdev1", 00:15:27.500 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:27.500 "strip_size_kb": 0, 00:15:27.500 "state": "online", 00:15:27.500 "raid_level": "raid1", 00:15:27.500 "superblock": true, 00:15:27.500 "num_base_bdevs": 2, 00:15:27.500 "num_base_bdevs_discovered": 1, 00:15:27.500 "num_base_bdevs_operational": 1, 00:15:27.500 "base_bdevs_list": [ 00:15:27.500 { 00:15:27.500 "name": null, 00:15:27.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.500 "is_configured": false, 00:15:27.500 "data_offset": 0, 00:15:27.500 "data_size": 7936 00:15:27.500 }, 00:15:27.500 { 00:15:27.500 "name": "BaseBdev2", 00:15:27.500 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:27.500 "is_configured": true, 00:15:27.500 "data_offset": 
256, 00:15:27.500 "data_size": 7936 00:15:27.500 } 00:15:27.500 ] 00:15:27.500 }' 00:15:27.500 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.500 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.759 "name": "raid_bdev1", 00:15:27.759 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:27.759 "strip_size_kb": 0, 00:15:27.759 "state": "online", 00:15:27.759 "raid_level": "raid1", 00:15:27.759 "superblock": true, 00:15:27.759 "num_base_bdevs": 2, 00:15:27.759 "num_base_bdevs_discovered": 1, 00:15:27.759 "num_base_bdevs_operational": 1, 
00:15:27.759 "base_bdevs_list": [ 00:15:27.759 { 00:15:27.759 "name": null, 00:15:27.759 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.759 "is_configured": false, 00:15:27.759 "data_offset": 0, 00:15:27.759 "data_size": 7936 00:15:27.759 }, 00:15:27.759 { 00:15:27.759 "name": "BaseBdev2", 00:15:27.759 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:27.759 "is_configured": true, 00:15:27.759 "data_offset": 256, 00:15:27.759 "data_size": 7936 00:15:27.759 } 00:15:27.759 ] 00:15:27.759 }' 00:15:27.759 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.018 [2024-11-27 21:47:50.961448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.018 [2024-11-27 21:47:50.963838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00019ca30 00:15:28.018 [2024-11-27 21:47:50.965656] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.018 21:47:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:28.955 21:47:51 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:28.955 21:47:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.955 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.955 "name": "raid_bdev1", 00:15:28.955 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:28.955 "strip_size_kb": 0, 00:15:28.955 "state": "online", 00:15:28.955 "raid_level": "raid1", 00:15:28.955 "superblock": true, 00:15:28.955 "num_base_bdevs": 2, 00:15:28.955 "num_base_bdevs_discovered": 2, 00:15:28.955 "num_base_bdevs_operational": 2, 00:15:28.955 "process": { 00:15:28.955 "type": "rebuild", 00:15:28.955 "target": "spare", 00:15:28.955 "progress": { 00:15:28.955 "blocks": 2560, 00:15:28.955 "percent": 32 00:15:28.955 } 00:15:28.955 }, 00:15:28.955 "base_bdevs_list": [ 00:15:28.955 { 00:15:28.955 "name": "spare", 00:15:28.955 "uuid": 
"09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:28.955 "is_configured": true, 00:15:28.955 "data_offset": 256, 00:15:28.955 "data_size": 7936 00:15:28.955 }, 00:15:28.955 { 00:15:28.955 "name": "BaseBdev2", 00:15:28.955 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:28.955 "is_configured": true, 00:15:28.955 "data_offset": 256, 00:15:28.955 "data_size": 7936 00:15:28.955 } 00:15:28.956 ] 00:15:28.956 }' 00:15:28.956 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.956 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:29.215 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=581 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.215 
21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.215 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.215 "name": "raid_bdev1", 00:15:29.215 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:29.215 "strip_size_kb": 0, 00:15:29.215 "state": "online", 00:15:29.215 "raid_level": "raid1", 00:15:29.215 "superblock": true, 00:15:29.215 "num_base_bdevs": 2, 00:15:29.215 "num_base_bdevs_discovered": 2, 00:15:29.215 "num_base_bdevs_operational": 2, 00:15:29.215 "process": { 00:15:29.215 "type": "rebuild", 00:15:29.215 "target": "spare", 00:15:29.215 "progress": { 00:15:29.215 "blocks": 2816, 00:15:29.215 "percent": 35 00:15:29.215 } 00:15:29.215 }, 00:15:29.215 "base_bdevs_list": [ 00:15:29.215 { 00:15:29.215 "name": "spare", 00:15:29.215 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:29.215 "is_configured": true, 00:15:29.215 "data_offset": 256, 00:15:29.215 "data_size": 7936 00:15:29.215 
}, 00:15:29.215 { 00:15:29.215 "name": "BaseBdev2", 00:15:29.215 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:29.215 "is_configured": true, 00:15:29.215 "data_offset": 256, 00:15:29.215 "data_size": 7936 00:15:29.215 } 00:15:29.215 ] 00:15:29.216 }' 00:15:29.216 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.216 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.216 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.216 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.216 21:47:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
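The `line 666: [: =: unary operator expected` message captured above is a real bash failure, not noise from the harness: the trace shows `'[' = false ']'`, meaning the variable under test expanded to nothing, so single-bracket `[` saw `=` where its left operand belonged. A minimal reproduction of the failure mode and the two usual fixes (the variable name `flag` is illustrative, not taken from bdev_raid.sh):

```shell
# An empty/unset variable inside an unquoted single-bracket test loses its
# operand:  [ $flag = false ]  expands to  [ = false ]  and bash reports
# "[: =: unary operator expected".
flag=""

# Fix 1: quote the expansion so an (empty) operand stays in place.
if [ "$flag" = false ]; then
    quoted_result="matched"
else
    quoted_result="no match"
fi

# Fix 2: use [[ ]], which does not word-split, so it is safe even unquoted.
if [[ $flag = false ]]; then
    dbracket_result="matched"
else
    dbracket_result="no match"
fi

echo "$quoted_result / $dbracket_result"   # prints: no match / no match
```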
00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:30.153 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.413 "name": "raid_bdev1", 00:15:30.413 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:30.413 "strip_size_kb": 0, 00:15:30.413 "state": "online", 00:15:30.413 "raid_level": "raid1", 00:15:30.413 "superblock": true, 00:15:30.413 "num_base_bdevs": 2, 00:15:30.413 "num_base_bdevs_discovered": 2, 00:15:30.413 "num_base_bdevs_operational": 2, 00:15:30.413 "process": { 00:15:30.413 "type": "rebuild", 00:15:30.413 "target": "spare", 00:15:30.413 "progress": { 00:15:30.413 "blocks": 5632, 00:15:30.413 "percent": 70 00:15:30.413 } 00:15:30.413 }, 00:15:30.413 "base_bdevs_list": [ 00:15:30.413 { 00:15:30.413 "name": "spare", 00:15:30.413 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:30.413 "is_configured": true, 00:15:30.413 "data_offset": 256, 00:15:30.413 "data_size": 7936 00:15:30.413 }, 00:15:30.413 { 00:15:30.413 "name": "BaseBdev2", 00:15:30.413 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:30.413 "is_configured": true, 00:15:30.413 "data_offset": 256, 00:15:30.413 "data_size": 7936 00:15:30.413 } 00:15:30.413 ] 00:15:30.413 }' 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.413 21:47:53 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.981 [2024-11-27 21:47:54.076171] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:30.981 [2024-11-27 21:47:54.076250] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:30.981 [2024-11-27 21:47:54.076357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.573 "name": "raid_bdev1", 00:15:31.573 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:31.573 
"strip_size_kb": 0, 00:15:31.573 "state": "online", 00:15:31.573 "raid_level": "raid1", 00:15:31.573 "superblock": true, 00:15:31.573 "num_base_bdevs": 2, 00:15:31.573 "num_base_bdevs_discovered": 2, 00:15:31.573 "num_base_bdevs_operational": 2, 00:15:31.573 "base_bdevs_list": [ 00:15:31.573 { 00:15:31.573 "name": "spare", 00:15:31.573 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:31.573 "is_configured": true, 00:15:31.573 "data_offset": 256, 00:15:31.573 "data_size": 7936 00:15:31.573 }, 00:15:31.573 { 00:15:31.573 "name": "BaseBdev2", 00:15:31.573 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:31.573 "is_configured": true, 00:15:31.573 "data_offset": 256, 00:15:31.573 "data_size": 7936 00:15:31.573 } 00:15:31.573 ] 00:15:31.573 }' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.573 21:47:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.573 "name": "raid_bdev1", 00:15:31.573 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:31.573 "strip_size_kb": 0, 00:15:31.573 "state": "online", 00:15:31.573 "raid_level": "raid1", 00:15:31.573 "superblock": true, 00:15:31.573 "num_base_bdevs": 2, 00:15:31.573 "num_base_bdevs_discovered": 2, 00:15:31.573 "num_base_bdevs_operational": 2, 00:15:31.573 "base_bdevs_list": [ 00:15:31.573 { 00:15:31.573 "name": "spare", 00:15:31.573 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:31.573 "is_configured": true, 00:15:31.573 "data_offset": 256, 00:15:31.573 "data_size": 7936 00:15:31.573 }, 00:15:31.573 { 00:15:31.573 "name": "BaseBdev2", 00:15:31.573 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:31.573 "is_configured": true, 00:15:31.573 "data_offset": 256, 00:15:31.573 "data_size": 7936 00:15:31.573 } 00:15:31.573 ] 00:15:31.573 }' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.573 21:47:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.573 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.897 "name": "raid_bdev1", 00:15:31.897 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:31.897 "strip_size_kb": 0, 00:15:31.897 "state": "online", 00:15:31.897 "raid_level": "raid1", 00:15:31.897 "superblock": true, 00:15:31.897 "num_base_bdevs": 2, 00:15:31.897 "num_base_bdevs_discovered": 2, 00:15:31.897 "num_base_bdevs_operational": 2, 00:15:31.897 "base_bdevs_list": [ 00:15:31.897 { 00:15:31.897 "name": "spare", 00:15:31.897 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:31.897 "is_configured": true, 00:15:31.897 "data_offset": 256, 00:15:31.897 "data_size": 7936 00:15:31.897 }, 00:15:31.897 { 00:15:31.897 "name": "BaseBdev2", 00:15:31.897 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:31.897 "is_configured": true, 00:15:31.897 "data_offset": 256, 00:15:31.897 "data_size": 7936 00:15:31.897 } 00:15:31.897 ] 00:15:31.897 }' 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.897 21:47:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.163 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.164 [2024-11-27 21:47:55.141159] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:32.164 [2024-11-27 21:47:55.141239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:32.164 [2024-11-27 21:47:55.141335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:32.164 [2024-11-27 21:47:55.141408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:15:32.164 [2024-11-27 21:47:55.141427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.164 21:47:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.164 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:32.423 /dev/nbd0 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.423 1+0 records in 00:15:32.423 1+0 records out 00:15:32.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000323853 
s, 12.6 MB/s 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.423 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:32.683 /dev/nbd1 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.683 1+0 records in 00:15:32.683 1+0 records out 00:15:32.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044246 s, 9.3 MB/s 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.683 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.942 21:47:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:33.202 
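The `waitfornbd`/`waitfornbd_exit` traces above poll `/proc/partitions` with `grep -q -w` for up to 20 iterations before touching the device. A sketch of that readiness pattern (function names are illustrative, not the actual autotest_common.sh helpers; the partitions path is parameterized here so the word-match logic can be exercised without a real nbd device):

```shell
# Check whether an nbd device appears as a whole word in a partitions listing,
# mirroring the "grep -q -w nbd0 /proc/partitions" probe in the log.
nbd_present() {
    local nbd_name=$1 partitions=${2:-/proc/partitions}
    grep -q -w "$nbd_name" "$partitions"
}

# Poll until the device shows up, bounded at 20 attempts like the log's
# "(( i <= 20 ))" loop; returns non-zero if it never appears.
waitfornbd_sketch() {
    local nbd_name=$1 partitions=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        nbd_present "$nbd_name" "$partitions" && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper follows a successful poll with a single-block `dd ... iflag=direct` read, which is what produces the `1+0 records in` / `1+0 records out` lines in the log above.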
21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 [2024-11-27 21:47:56.177597] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:33.202 [2024-11-27 21:47:56.177651] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.202 [2024-11-27 21:47:56.177670] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 
00:15:33.202 [2024-11-27 21:47:56.177681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.202 [2024-11-27 21:47:56.179597] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.202 [2024-11-27 21:47:56.179637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:33.202 [2024-11-27 21:47:56.179686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:33.202 [2024-11-27 21:47:56.179730] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:33.202 [2024-11-27 21:47:56.179867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.202 spare 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 [2024-11-27 21:47:56.279752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:33.202 [2024-11-27 21:47:56.279822] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:15:33.202 [2024-11-27 21:47:56.279914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb1b0 00:15:33.202 [2024-11-27 21:47:56.280005] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:33.202 [2024-11-27 21:47:56.280015] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:33.202 [2024-11-27 21:47:56.280135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.202 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.461 21:47:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.462 "name": "raid_bdev1", 00:15:33.462 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:33.462 "strip_size_kb": 0, 00:15:33.462 "state": "online", 00:15:33.462 "raid_level": "raid1", 00:15:33.462 "superblock": true, 00:15:33.462 "num_base_bdevs": 2, 00:15:33.462 "num_base_bdevs_discovered": 2, 00:15:33.462 "num_base_bdevs_operational": 2, 00:15:33.462 "base_bdevs_list": [ 00:15:33.462 { 00:15:33.462 "name": "spare", 00:15:33.462 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:33.462 "is_configured": true, 00:15:33.462 "data_offset": 256, 00:15:33.462 "data_size": 7936 00:15:33.462 }, 00:15:33.462 { 00:15:33.462 "name": "BaseBdev2", 00:15:33.462 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:33.462 "is_configured": true, 00:15:33.462 "data_offset": 256, 00:15:33.462 "data_size": 7936 00:15:33.462 } 00:15:33.462 ] 00:15:33.462 }' 00:15:33.462 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.462 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.721 "name": "raid_bdev1", 00:15:33.721 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:33.721 "strip_size_kb": 0, 00:15:33.721 "state": "online", 00:15:33.721 "raid_level": "raid1", 00:15:33.721 "superblock": true, 00:15:33.721 "num_base_bdevs": 2, 00:15:33.721 "num_base_bdevs_discovered": 2, 00:15:33.721 "num_base_bdevs_operational": 2, 00:15:33.721 "base_bdevs_list": [ 00:15:33.721 { 00:15:33.721 "name": "spare", 00:15:33.721 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:33.721 "is_configured": true, 00:15:33.721 "data_offset": 256, 00:15:33.721 "data_size": 7936 00:15:33.721 }, 00:15:33.721 { 00:15:33.721 "name": "BaseBdev2", 00:15:33.721 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:33.721 "is_configured": true, 00:15:33.721 "data_offset": 256, 00:15:33.721 "data_size": 7936 00:15:33.721 } 00:15:33.721 ] 00:15:33.721 }' 00:15:33.721 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.980 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.980 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.981 [2024-11-27 21:47:56.972288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:33.981 21:47:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.981 21:47:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:33.981 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.981 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.981 "name": "raid_bdev1", 00:15:33.981 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:33.981 "strip_size_kb": 0, 00:15:33.981 "state": "online", 00:15:33.981 "raid_level": "raid1", 00:15:33.981 "superblock": true, 00:15:33.981 "num_base_bdevs": 2, 00:15:33.981 "num_base_bdevs_discovered": 1, 00:15:33.981 "num_base_bdevs_operational": 1, 00:15:33.981 "base_bdevs_list": [ 00:15:33.981 { 00:15:33.981 "name": null, 00:15:33.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.981 "is_configured": false, 00:15:33.981 "data_offset": 0, 00:15:33.981 "data_size": 7936 00:15:33.981 }, 00:15:33.981 { 00:15:33.981 "name": "BaseBdev2", 00:15:33.981 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:33.981 "is_configured": true, 00:15:33.981 "data_offset": 256, 00:15:33.981 "data_size": 7936 00:15:33.981 } 
00:15:33.981 ] 00:15:33.981 }' 00:15:33.981 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.981 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.549 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:34.549 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.549 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:34.549 [2024-11-27 21:47:57.439526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.549 [2024-11-27 21:47:57.439713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:34.549 [2024-11-27 21:47:57.439778] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:34.549 [2024-11-27 21:47:57.439842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:34.549 [2024-11-27 21:47:57.442337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb280 00:15:34.549 [2024-11-27 21:47:57.444207] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:34.549 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.549 21:47:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:35.488 "name": "raid_bdev1", 00:15:35.488 
"uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:35.488 "strip_size_kb": 0, 00:15:35.488 "state": "online", 00:15:35.488 "raid_level": "raid1", 00:15:35.488 "superblock": true, 00:15:35.488 "num_base_bdevs": 2, 00:15:35.488 "num_base_bdevs_discovered": 2, 00:15:35.488 "num_base_bdevs_operational": 2, 00:15:35.488 "process": { 00:15:35.488 "type": "rebuild", 00:15:35.488 "target": "spare", 00:15:35.488 "progress": { 00:15:35.488 "blocks": 2560, 00:15:35.488 "percent": 32 00:15:35.488 } 00:15:35.488 }, 00:15:35.488 "base_bdevs_list": [ 00:15:35.488 { 00:15:35.488 "name": "spare", 00:15:35.488 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:35.488 "is_configured": true, 00:15:35.488 "data_offset": 256, 00:15:35.488 "data_size": 7936 00:15:35.488 }, 00:15:35.488 { 00:15:35.488 "name": "BaseBdev2", 00:15:35.488 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:35.488 "is_configured": true, 00:15:35.488 "data_offset": 256, 00:15:35.488 "data_size": 7936 00:15:35.488 } 00:15:35.488 ] 00:15:35.488 }' 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.488 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.488 [2024-11-27 21:47:58.607042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.748 
[2024-11-27 21:47:58.648308] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:35.748 [2024-11-27 21:47:58.648393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:35.748 [2024-11-27 21:47:58.648412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:35.748 [2024-11-27 21:47:58.648420] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.748 21:47:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.748 "name": "raid_bdev1", 00:15:35.748 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:35.748 "strip_size_kb": 0, 00:15:35.748 "state": "online", 00:15:35.748 "raid_level": "raid1", 00:15:35.748 "superblock": true, 00:15:35.748 "num_base_bdevs": 2, 00:15:35.748 "num_base_bdevs_discovered": 1, 00:15:35.748 "num_base_bdevs_operational": 1, 00:15:35.748 "base_bdevs_list": [ 00:15:35.748 { 00:15:35.748 "name": null, 00:15:35.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.748 "is_configured": false, 00:15:35.748 "data_offset": 0, 00:15:35.748 "data_size": 7936 00:15:35.748 }, 00:15:35.748 { 00:15:35.748 "name": "BaseBdev2", 00:15:35.748 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:35.748 "is_configured": true, 00:15:35.748 "data_offset": 256, 00:15:35.748 "data_size": 7936 00:15:35.748 } 00:15:35.748 ] 00:15:35.748 }' 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.748 21:47:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:36.008 21:47:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:36.008 21:47:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.008 21:47:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.008 [2024-11-27 21:47:59.086423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:36.008 [2024-11-27 21:47:59.086521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.008 [2024-11-27 21:47:59.086566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:36.008 [2024-11-27 21:47:59.086594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.008 [2024-11-27 21:47:59.086867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.008 [2024-11-27 21:47:59.086916] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:36.008 [2024-11-27 21:47:59.087008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:36.008 [2024-11-27 21:47:59.087043] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:36.008 [2024-11-27 21:47:59.087108] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:36.008 [2024-11-27 21:47:59.087159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:36.008 [2024-11-27 21:47:59.089299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001bb350 00:15:36.008 [2024-11-27 21:47:59.091158] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:36.008 spare 00:15:36.008 21:47:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.008 21:47:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.387 "name": 
"raid_bdev1", 00:15:37.387 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:37.387 "strip_size_kb": 0, 00:15:37.387 "state": "online", 00:15:37.387 "raid_level": "raid1", 00:15:37.387 "superblock": true, 00:15:37.387 "num_base_bdevs": 2, 00:15:37.387 "num_base_bdevs_discovered": 2, 00:15:37.387 "num_base_bdevs_operational": 2, 00:15:37.387 "process": { 00:15:37.387 "type": "rebuild", 00:15:37.387 "target": "spare", 00:15:37.387 "progress": { 00:15:37.387 "blocks": 2560, 00:15:37.387 "percent": 32 00:15:37.387 } 00:15:37.387 }, 00:15:37.387 "base_bdevs_list": [ 00:15:37.387 { 00:15:37.387 "name": "spare", 00:15:37.387 "uuid": "09820229-9c12-5e7c-b5e2-1e053549689a", 00:15:37.387 "is_configured": true, 00:15:37.387 "data_offset": 256, 00:15:37.387 "data_size": 7936 00:15:37.387 }, 00:15:37.387 { 00:15:37.387 "name": "BaseBdev2", 00:15:37.387 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:37.387 "is_configured": true, 00:15:37.387 "data_offset": 256, 00:15:37.387 "data_size": 7936 00:15:37.387 } 00:15:37.387 ] 00:15:37.387 }' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.387 [2024-11-27 21:48:00.230074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:37.387 [2024-11-27 21:48:00.295292] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:37.387 [2024-11-27 21:48:00.295348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.387 [2024-11-27 21:48:00.295361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:37.387 [2024-11-27 21:48:00.295370] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.387 "name": "raid_bdev1", 00:15:37.387 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:37.387 "strip_size_kb": 0, 00:15:37.387 "state": "online", 00:15:37.387 "raid_level": "raid1", 00:15:37.387 "superblock": true, 00:15:37.387 "num_base_bdevs": 2, 00:15:37.387 "num_base_bdevs_discovered": 1, 00:15:37.387 "num_base_bdevs_operational": 1, 00:15:37.387 "base_bdevs_list": [ 00:15:37.387 { 00:15:37.387 "name": null, 00:15:37.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.387 "is_configured": false, 00:15:37.387 "data_offset": 0, 00:15:37.387 "data_size": 7936 00:15:37.387 }, 00:15:37.387 { 00:15:37.387 "name": "BaseBdev2", 00:15:37.387 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:37.387 "is_configured": true, 00:15:37.387 "data_offset": 256, 00:15:37.387 "data_size": 7936 00:15:37.387 } 00:15:37.387 ] 00:15:37.387 }' 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.387 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:37.648 21:48:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.648 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:37.908 "name": "raid_bdev1", 00:15:37.908 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:37.908 "strip_size_kb": 0, 00:15:37.908 "state": "online", 00:15:37.908 "raid_level": "raid1", 00:15:37.908 "superblock": true, 00:15:37.908 "num_base_bdevs": 2, 00:15:37.908 "num_base_bdevs_discovered": 1, 00:15:37.908 "num_base_bdevs_operational": 1, 00:15:37.908 "base_bdevs_list": [ 00:15:37.908 { 00:15:37.908 "name": null, 00:15:37.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.908 "is_configured": false, 00:15:37.908 "data_offset": 0, 00:15:37.908 "data_size": 7936 00:15:37.908 }, 00:15:37.908 { 00:15:37.908 "name": "BaseBdev2", 00:15:37.908 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:37.908 "is_configured": true, 00:15:37.908 "data_offset": 256, 00:15:37.908 "data_size": 7936 00:15:37.908 } 00:15:37.908 ] 00:15:37.908 }' 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:37.908 [2024-11-27 21:48:00.917041] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:37.908 [2024-11-27 21:48:00.917131] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.908 [2024-11-27 21:48:00.917156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:37.908 [2024-11-27 21:48:00.917167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.908 [2024-11-27 21:48:00.917357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.908 [2024-11-27 21:48:00.917374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:15:37.908 [2024-11-27 21:48:00.917420] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:37.908 [2024-11-27 21:48:00.917438] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:37.908 [2024-11-27 21:48:00.917450] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:37.908 [2024-11-27 21:48:00.917460] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:37.908 BaseBdev1 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.908 21:48:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:38.845 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.105 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.105 "name": "raid_bdev1", 00:15:39.105 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:39.105 "strip_size_kb": 0, 00:15:39.105 "state": "online", 00:15:39.105 "raid_level": "raid1", 00:15:39.105 "superblock": true, 00:15:39.105 "num_base_bdevs": 2, 00:15:39.105 "num_base_bdevs_discovered": 1, 00:15:39.105 "num_base_bdevs_operational": 1, 00:15:39.105 "base_bdevs_list": [ 00:15:39.105 { 00:15:39.105 "name": null, 00:15:39.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.105 "is_configured": false, 00:15:39.105 "data_offset": 0, 00:15:39.105 "data_size": 7936 00:15:39.105 }, 00:15:39.105 { 00:15:39.105 "name": "BaseBdev2", 00:15:39.105 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:39.105 "is_configured": true, 00:15:39.105 "data_offset": 256, 00:15:39.105 "data_size": 7936 00:15:39.105 } 00:15:39.105 ] 00:15:39.105 }' 00:15:39.105 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.105 21:48:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.364 "name": "raid_bdev1", 00:15:39.364 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:39.364 "strip_size_kb": 0, 00:15:39.364 "state": "online", 00:15:39.364 "raid_level": "raid1", 00:15:39.364 "superblock": true, 00:15:39.364 "num_base_bdevs": 2, 00:15:39.364 "num_base_bdevs_discovered": 1, 00:15:39.364 "num_base_bdevs_operational": 1, 00:15:39.364 "base_bdevs_list": [ 00:15:39.364 { 00:15:39.364 "name": null, 00:15:39.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.364 "is_configured": false, 00:15:39.364 "data_offset": 0, 00:15:39.364 "data_size": 7936 00:15:39.364 }, 00:15:39.364 { 00:15:39.364 "name": "BaseBdev2", 00:15:39.364 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:39.364 "is_configured": 
true, 00:15:39.364 "data_offset": 256, 00:15:39.364 "data_size": 7936 00:15:39.364 } 00:15:39.364 ] 00:15:39.364 }' 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.364 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:39.624 [2024-11-27 21:48:02.490671] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.624 [2024-11-27 21:48:02.490806] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:39.624 [2024-11-27 21:48:02.490818] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:39.624 request: 00:15:39.624 { 00:15:39.624 "base_bdev": "BaseBdev1", 00:15:39.624 "raid_bdev": "raid_bdev1", 00:15:39.624 "method": "bdev_raid_add_base_bdev", 00:15:39.624 "req_id": 1 00:15:39.624 } 00:15:39.624 Got JSON-RPC error response 00:15:39.624 response: 00:15:39.624 { 00:15:39.624 "code": -22, 00:15:39.624 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:39.624 } 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:39.624 21:48:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:40.562 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.563 "name": "raid_bdev1", 00:15:40.563 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:40.563 "strip_size_kb": 0, 00:15:40.563 "state": "online", 00:15:40.563 "raid_level": "raid1", 00:15:40.563 "superblock": true, 00:15:40.563 "num_base_bdevs": 2, 00:15:40.563 "num_base_bdevs_discovered": 1, 00:15:40.563 "num_base_bdevs_operational": 1, 00:15:40.563 "base_bdevs_list": [ 00:15:40.563 { 00:15:40.563 "name": null, 00:15:40.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.563 "is_configured": false, 00:15:40.563 
"data_offset": 0, 00:15:40.563 "data_size": 7936 00:15:40.563 }, 00:15:40.563 { 00:15:40.563 "name": "BaseBdev2", 00:15:40.563 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:40.563 "is_configured": true, 00:15:40.563 "data_offset": 256, 00:15:40.563 "data_size": 7936 00:15:40.563 } 00:15:40.563 ] 00:15:40.563 }' 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.563 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:40.822 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.822 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.822 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:40.822 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:40.822 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.083 "name": "raid_bdev1", 00:15:41.083 "uuid": "bca13901-7b09-4e32-b4e4-6c0d7a1bccf9", 00:15:41.083 
"strip_size_kb": 0, 00:15:41.083 "state": "online", 00:15:41.083 "raid_level": "raid1", 00:15:41.083 "superblock": true, 00:15:41.083 "num_base_bdevs": 2, 00:15:41.083 "num_base_bdevs_discovered": 1, 00:15:41.083 "num_base_bdevs_operational": 1, 00:15:41.083 "base_bdevs_list": [ 00:15:41.083 { 00:15:41.083 "name": null, 00:15:41.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.083 "is_configured": false, 00:15:41.083 "data_offset": 0, 00:15:41.083 "data_size": 7936 00:15:41.083 }, 00:15:41.083 { 00:15:41.083 "name": "BaseBdev2", 00:15:41.083 "uuid": "002067d8-1526-5d5c-897e-2143fa4d53fd", 00:15:41.083 "is_configured": true, 00:15:41.083 "data_offset": 256, 00:15:41.083 "data_size": 7936 00:15:41.083 } 00:15:41.083 ] 00:15:41.083 }' 00:15:41.083 21:48:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 97750 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 97750 ']' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 97750 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97750 00:15:41.083 21:48:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.083 killing process with pid 97750 00:15:41.083 Received shutdown signal, test time was about 60.000000 seconds 00:15:41.083 00:15:41.083 Latency(us) 00:15:41.083 [2024-11-27T21:48:04.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:41.083 [2024-11-27T21:48:04.204Z] =================================================================================================================== 00:15:41.083 [2024-11-27T21:48:04.204Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97750' 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 97750 00:15:41.083 [2024-11-27 21:48:04.096884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.083 [2024-11-27 21:48:04.096976] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.083 [2024-11-27 21:48:04.097021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.083 [2024-11-27 21:48:04.097030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:15:41.083 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 97750 00:15:41.083 [2024-11-27 21:48:04.129980] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:41.344 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:15:41.344 00:15:41.344 real 0m18.316s 00:15:41.344 user 0m24.281s 00:15:41.344 sys 0m2.695s 00:15:41.344 21:48:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:41.344 ************************************ 00:15:41.344 END TEST raid_rebuild_test_sb_md_separate 00:15:41.344 ************************************ 00:15:41.344 21:48:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:15:41.344 21:48:04 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:15:41.344 21:48:04 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:15:41.344 21:48:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:41.344 21:48:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:41.344 21:48:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:41.344 ************************************ 00:15:41.344 START TEST raid_state_function_test_sb_md_interleaved 00:15:41.344 ************************************ 00:15:41.344 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:15:41.344 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:15:41.344 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.345 21:48:04 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=98426 00:15:41.345 Process raid pid: 98426 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 98426' 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 98426 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98426 ']' 00:15:41.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.345 21:48:04 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:41.605 [2024-11-27 21:48:04.519006] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:41.605 [2024-11-27 21:48:04.519165] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.605 [2024-11-27 21:48:04.676264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.605 [2024-11-27 21:48:04.701930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:41.864 [2024-11-27 21:48:04.745144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:41.864 [2024-11-27 21:48:04.745176] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.433 [2024-11-27 21:48:05.348116] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.433 [2024-11-27 21:48:05.348178] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.433 [2024-11-27 21:48:05.348196] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.433 [2024-11-27 21:48:05.348207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.433 21:48:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.433 21:48:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.433 "name": "Existed_Raid", 00:15:42.433 "uuid": "5aef423f-0fd0-4724-8112-c63f7bdd4736", 00:15:42.433 "strip_size_kb": 0, 00:15:42.433 "state": "configuring", 00:15:42.433 "raid_level": "raid1", 00:15:42.433 "superblock": true, 00:15:42.433 "num_base_bdevs": 2, 00:15:42.433 "num_base_bdevs_discovered": 0, 00:15:42.433 "num_base_bdevs_operational": 2, 00:15:42.433 "base_bdevs_list": [ 00:15:42.433 { 00:15:42.433 "name": "BaseBdev1", 00:15:42.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.433 "is_configured": false, 00:15:42.433 "data_offset": 0, 00:15:42.433 "data_size": 0 00:15:42.433 }, 00:15:42.433 { 00:15:42.433 "name": "BaseBdev2", 00:15:42.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.433 "is_configured": false, 00:15:42.433 "data_offset": 0, 00:15:42.433 "data_size": 0 00:15:42.433 } 00:15:42.433 ] 00:15:42.433 }' 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.433 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.693 [2024-11-27 21:48:05.791250] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:42.693 [2024-11-27 21:48:05.791284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name Existed_Raid, state 
configuring 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.693 [2024-11-27 21:48:05.799248] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:42.693 [2024-11-27 21:48:05.799325] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:42.693 [2024-11-27 21:48:05.799351] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:42.693 [2024-11-27 21:48:05.799384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.693 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.952 [2024-11-27 21:48:05.816367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.952 BaseBdev1 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.952 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.953 [ 00:15:42.953 { 00:15:42.953 "name": "BaseBdev1", 00:15:42.953 "aliases": [ 00:15:42.953 "9e8fb738-cca0-4ace-8f11-7b25e54a10df" 00:15:42.953 ], 00:15:42.953 "product_name": "Malloc disk", 00:15:42.953 "block_size": 4128, 00:15:42.953 "num_blocks": 8192, 00:15:42.953 "uuid": "9e8fb738-cca0-4ace-8f11-7b25e54a10df", 00:15:42.953 "md_size": 32, 00:15:42.953 
"md_interleave": true, 00:15:42.953 "dif_type": 0, 00:15:42.953 "assigned_rate_limits": { 00:15:42.953 "rw_ios_per_sec": 0, 00:15:42.953 "rw_mbytes_per_sec": 0, 00:15:42.953 "r_mbytes_per_sec": 0, 00:15:42.953 "w_mbytes_per_sec": 0 00:15:42.953 }, 00:15:42.953 "claimed": true, 00:15:42.953 "claim_type": "exclusive_write", 00:15:42.953 "zoned": false, 00:15:42.953 "supported_io_types": { 00:15:42.953 "read": true, 00:15:42.953 "write": true, 00:15:42.953 "unmap": true, 00:15:42.953 "flush": true, 00:15:42.953 "reset": true, 00:15:42.953 "nvme_admin": false, 00:15:42.953 "nvme_io": false, 00:15:42.953 "nvme_io_md": false, 00:15:42.953 "write_zeroes": true, 00:15:42.953 "zcopy": true, 00:15:42.953 "get_zone_info": false, 00:15:42.953 "zone_management": false, 00:15:42.953 "zone_append": false, 00:15:42.953 "compare": false, 00:15:42.953 "compare_and_write": false, 00:15:42.953 "abort": true, 00:15:42.953 "seek_hole": false, 00:15:42.953 "seek_data": false, 00:15:42.953 "copy": true, 00:15:42.953 "nvme_iov_md": false 00:15:42.953 }, 00:15:42.953 "memory_domains": [ 00:15:42.953 { 00:15:42.953 "dma_device_id": "system", 00:15:42.953 "dma_device_type": 1 00:15:42.953 }, 00:15:42.953 { 00:15:42.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.953 "dma_device_type": 2 00:15:42.953 } 00:15:42.953 ], 00:15:42.953 "driver_specific": {} 00:15:42.953 } 00:15:42.953 ] 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.953 21:48:05 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.953 "name": "Existed_Raid", 00:15:42.953 "uuid": "83e6b30b-7fd6-462c-873a-d6a54bfca6a0", 00:15:42.953 "strip_size_kb": 0, 00:15:42.953 "state": "configuring", 00:15:42.953 "raid_level": "raid1", 
00:15:42.953 "superblock": true, 00:15:42.953 "num_base_bdevs": 2, 00:15:42.953 "num_base_bdevs_discovered": 1, 00:15:42.953 "num_base_bdevs_operational": 2, 00:15:42.953 "base_bdevs_list": [ 00:15:42.953 { 00:15:42.953 "name": "BaseBdev1", 00:15:42.953 "uuid": "9e8fb738-cca0-4ace-8f11-7b25e54a10df", 00:15:42.953 "is_configured": true, 00:15:42.953 "data_offset": 256, 00:15:42.953 "data_size": 7936 00:15:42.953 }, 00:15:42.953 { 00:15:42.953 "name": "BaseBdev2", 00:15:42.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.953 "is_configured": false, 00:15:42.953 "data_offset": 0, 00:15:42.953 "data_size": 0 00:15:42.953 } 00:15:42.953 ] 00:15:42.953 }' 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.953 21:48:05 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-11-27 21:48:06.303905] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:43.212 [2024-11-27 21:48:06.303986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name Existed_Raid, state configuring 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.212 [2024-11-27 21:48:06.315929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:43.212 [2024-11-27 21:48:06.317766] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:43.212 [2024-11-27 21:48:06.317818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:15:43.212 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.213 
21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.213 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.472 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.472 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.472 "name": "Existed_Raid", 00:15:43.472 "uuid": "aa07e970-7504-4cea-8b32-4afeebb1202c", 00:15:43.472 "strip_size_kb": 0, 00:15:43.472 "state": "configuring", 00:15:43.472 "raid_level": "raid1", 00:15:43.472 "superblock": true, 00:15:43.472 "num_base_bdevs": 2, 00:15:43.472 "num_base_bdevs_discovered": 1, 00:15:43.472 "num_base_bdevs_operational": 2, 00:15:43.472 "base_bdevs_list": [ 00:15:43.472 { 00:15:43.472 "name": "BaseBdev1", 00:15:43.472 "uuid": "9e8fb738-cca0-4ace-8f11-7b25e54a10df", 00:15:43.472 "is_configured": true, 00:15:43.472 "data_offset": 256, 00:15:43.472 "data_size": 7936 00:15:43.472 }, 00:15:43.472 { 00:15:43.472 "name": "BaseBdev2", 00:15:43.472 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.472 "is_configured": false, 00:15:43.472 "data_offset": 0, 00:15:43.472 "data_size": 0 00:15:43.472 } 00:15:43.472 ] 00:15:43.472 }' 00:15:43.472 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:15:43.472 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.732 [2024-11-27 21:48:06.750134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.732 [2024-11-27 21:48:06.750364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:43.732 [2024-11-27 21:48:06.750428] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:43.732 [2024-11-27 21:48:06.750576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:43.732 [2024-11-27 21:48:06.750707] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:43.732 [2024-11-27 21:48:06.750756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000001900 00:15:43.732 [2024-11-27 21:48:06.750905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:43.732 BaseBdev2 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.732 [ 00:15:43.732 { 00:15:43.732 "name": "BaseBdev2", 00:15:43.732 "aliases": [ 00:15:43.732 "e3ed0ab4-50dd-452c-ba4a-e18c5c0f348c" 00:15:43.732 ], 00:15:43.732 "product_name": "Malloc disk", 00:15:43.732 "block_size": 4128, 00:15:43.732 "num_blocks": 8192, 00:15:43.732 "uuid": "e3ed0ab4-50dd-452c-ba4a-e18c5c0f348c", 00:15:43.732 "md_size": 32, 00:15:43.732 "md_interleave": true, 00:15:43.732 "dif_type": 0, 00:15:43.732 "assigned_rate_limits": { 00:15:43.732 "rw_ios_per_sec": 0, 00:15:43.732 "rw_mbytes_per_sec": 0, 00:15:43.732 "r_mbytes_per_sec": 0, 00:15:43.732 "w_mbytes_per_sec": 0 00:15:43.732 }, 00:15:43.732 "claimed": true, 00:15:43.732 "claim_type": "exclusive_write", 
00:15:43.732 "zoned": false, 00:15:43.732 "supported_io_types": { 00:15:43.732 "read": true, 00:15:43.732 "write": true, 00:15:43.732 "unmap": true, 00:15:43.732 "flush": true, 00:15:43.732 "reset": true, 00:15:43.732 "nvme_admin": false, 00:15:43.732 "nvme_io": false, 00:15:43.732 "nvme_io_md": false, 00:15:43.732 "write_zeroes": true, 00:15:43.732 "zcopy": true, 00:15:43.732 "get_zone_info": false, 00:15:43.732 "zone_management": false, 00:15:43.732 "zone_append": false, 00:15:43.732 "compare": false, 00:15:43.732 "compare_and_write": false, 00:15:43.732 "abort": true, 00:15:43.732 "seek_hole": false, 00:15:43.732 "seek_data": false, 00:15:43.732 "copy": true, 00:15:43.732 "nvme_iov_md": false 00:15:43.732 }, 00:15:43.732 "memory_domains": [ 00:15:43.732 { 00:15:43.732 "dma_device_id": "system", 00:15:43.732 "dma_device_type": 1 00:15:43.732 }, 00:15:43.732 { 00:15:43.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.732 "dma_device_type": 2 00:15:43.732 } 00:15:43.732 ], 00:15:43.732 "driver_specific": {} 00:15:43.732 } 00:15:43.732 ] 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:43.732 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.733 
21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.733 "name": "Existed_Raid", 00:15:43.733 "uuid": "aa07e970-7504-4cea-8b32-4afeebb1202c", 00:15:43.733 "strip_size_kb": 0, 00:15:43.733 "state": "online", 00:15:43.733 "raid_level": "raid1", 00:15:43.733 "superblock": true, 00:15:43.733 "num_base_bdevs": 2, 00:15:43.733 "num_base_bdevs_discovered": 2, 00:15:43.733 
"num_base_bdevs_operational": 2, 00:15:43.733 "base_bdevs_list": [ 00:15:43.733 { 00:15:43.733 "name": "BaseBdev1", 00:15:43.733 "uuid": "9e8fb738-cca0-4ace-8f11-7b25e54a10df", 00:15:43.733 "is_configured": true, 00:15:43.733 "data_offset": 256, 00:15:43.733 "data_size": 7936 00:15:43.733 }, 00:15:43.733 { 00:15:43.733 "name": "BaseBdev2", 00:15:43.733 "uuid": "e3ed0ab4-50dd-452c-ba4a-e18c5c0f348c", 00:15:43.733 "is_configured": true, 00:15:43.733 "data_offset": 256, 00:15:43.733 "data_size": 7936 00:15:43.733 } 00:15:43.733 ] 00:15:43.733 }' 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.733 21:48:06 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.302 21:48:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 [2024-11-27 21:48:07.237577] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:44.302 "name": "Existed_Raid", 00:15:44.302 "aliases": [ 00:15:44.302 "aa07e970-7504-4cea-8b32-4afeebb1202c" 00:15:44.302 ], 00:15:44.302 "product_name": "Raid Volume", 00:15:44.302 "block_size": 4128, 00:15:44.302 "num_blocks": 7936, 00:15:44.302 "uuid": "aa07e970-7504-4cea-8b32-4afeebb1202c", 00:15:44.302 "md_size": 32, 00:15:44.302 "md_interleave": true, 00:15:44.302 "dif_type": 0, 00:15:44.302 "assigned_rate_limits": { 00:15:44.302 "rw_ios_per_sec": 0, 00:15:44.302 "rw_mbytes_per_sec": 0, 00:15:44.302 "r_mbytes_per_sec": 0, 00:15:44.302 "w_mbytes_per_sec": 0 00:15:44.302 }, 00:15:44.302 "claimed": false, 00:15:44.302 "zoned": false, 00:15:44.302 "supported_io_types": { 00:15:44.302 "read": true, 00:15:44.302 "write": true, 00:15:44.302 "unmap": false, 00:15:44.302 "flush": false, 00:15:44.302 "reset": true, 00:15:44.302 "nvme_admin": false, 00:15:44.302 "nvme_io": false, 00:15:44.302 "nvme_io_md": false, 00:15:44.302 "write_zeroes": true, 00:15:44.302 "zcopy": false, 00:15:44.302 "get_zone_info": false, 00:15:44.302 "zone_management": false, 00:15:44.302 "zone_append": false, 00:15:44.302 "compare": false, 00:15:44.302 "compare_and_write": false, 00:15:44.302 "abort": false, 00:15:44.302 "seek_hole": false, 00:15:44.302 "seek_data": false, 00:15:44.302 "copy": false, 00:15:44.302 "nvme_iov_md": false 00:15:44.302 }, 00:15:44.302 "memory_domains": [ 00:15:44.302 { 00:15:44.302 "dma_device_id": "system", 00:15:44.302 "dma_device_type": 1 00:15:44.302 }, 00:15:44.302 { 00:15:44.302 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:44.302 "dma_device_type": 2 00:15:44.302 }, 00:15:44.302 { 00:15:44.302 "dma_device_id": "system", 00:15:44.302 "dma_device_type": 1 00:15:44.302 }, 00:15:44.302 { 00:15:44.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.302 "dma_device_type": 2 00:15:44.302 } 00:15:44.302 ], 00:15:44.302 "driver_specific": { 00:15:44.302 "raid": { 00:15:44.302 "uuid": "aa07e970-7504-4cea-8b32-4afeebb1202c", 00:15:44.302 "strip_size_kb": 0, 00:15:44.302 "state": "online", 00:15:44.302 "raid_level": "raid1", 00:15:44.302 "superblock": true, 00:15:44.302 "num_base_bdevs": 2, 00:15:44.302 "num_base_bdevs_discovered": 2, 00:15:44.302 "num_base_bdevs_operational": 2, 00:15:44.302 "base_bdevs_list": [ 00:15:44.302 { 00:15:44.302 "name": "BaseBdev1", 00:15:44.302 "uuid": "9e8fb738-cca0-4ace-8f11-7b25e54a10df", 00:15:44.302 "is_configured": true, 00:15:44.302 "data_offset": 256, 00:15:44.302 "data_size": 7936 00:15:44.302 }, 00:15:44.302 { 00:15:44.302 "name": "BaseBdev2", 00:15:44.302 "uuid": "e3ed0ab4-50dd-452c-ba4a-e18c5c0f348c", 00:15:44.302 "is_configured": true, 00:15:44.302 "data_offset": 256, 00:15:44.302 "data_size": 7936 00:15:44.302 } 00:15:44.302 ] 00:15:44.302 } 00:15:44.302 } 00:15:44.302 }' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:44.302 BaseBdev2' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.302 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.562 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:44.563 
21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.563 [2024-11-27 21:48:07.465006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:44.563 21:48:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.563 "name": "Existed_Raid", 00:15:44.563 "uuid": "aa07e970-7504-4cea-8b32-4afeebb1202c", 00:15:44.563 "strip_size_kb": 0, 00:15:44.563 "state": "online", 00:15:44.563 "raid_level": "raid1", 00:15:44.563 "superblock": true, 00:15:44.563 "num_base_bdevs": 2, 00:15:44.563 "num_base_bdevs_discovered": 1, 00:15:44.563 "num_base_bdevs_operational": 1, 00:15:44.563 "base_bdevs_list": [ 00:15:44.563 { 00:15:44.563 "name": null, 00:15:44.563 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:44.563 "is_configured": false, 00:15:44.563 "data_offset": 0, 00:15:44.563 "data_size": 7936 00:15:44.563 }, 00:15:44.563 { 00:15:44.563 "name": "BaseBdev2", 00:15:44.563 "uuid": "e3ed0ab4-50dd-452c-ba4a-e18c5c0f348c", 00:15:44.563 "is_configured": true, 00:15:44.563 "data_offset": 256, 00:15:44.563 "data_size": 7936 00:15:44.563 } 00:15:44.563 ] 00:15:44.563 }' 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.563 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:44.822 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:45.081 21:48:07 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.081 [2024-11-27 21:48:07.959915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:45.081 [2024-11-27 21:48:07.960048] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.081 [2024-11-27 21:48:07.972043] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.081 [2024-11-27 21:48:07.972184] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.081 [2024-11-27 21:48:07.972224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name Existed_Raid, state offline 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.081 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.082 21:48:07 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 98426 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98426 ']' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98426 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98426 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.082 killing process with pid 98426 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98426' 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 98426 00:15:45.082 [2024-11-27 21:48:08.072430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.082 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 98426 00:15:45.082 [2024-11-27 21:48:08.073402] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:45.341 
21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:15:45.341 00:15:45.341 real 0m3.882s 00:15:45.341 user 0m6.098s 00:15:45.341 sys 0m0.859s 00:15:45.341 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.341 ************************************ 00:15:45.341 END TEST raid_state_function_test_sb_md_interleaved 00:15:45.341 ************************************ 00:15:45.341 21:48:08 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.341 21:48:08 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:15:45.341 21:48:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:45.341 21:48:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.341 21:48:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:45.341 ************************************ 00:15:45.341 START TEST raid_superblock_test_md_interleaved 00:15:45.341 ************************************ 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=98667 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 98667 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98667 ']' 00:15:45.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:45.341 21:48:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:45.600 [2024-11-27 21:48:08.466529] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:15:45.600 [2024-11-27 21:48:08.466674] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98667 ] 00:15:45.600 [2024-11-27 21:48:08.624483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:45.600 [2024-11-27 21:48:08.650634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.600 [2024-11-27 21:48:08.693825] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:45.600 [2024-11-27 21:48:08.693865] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 malloc1 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.180 [2024-11-27 21:48:09.289429] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:46.180 [2024-11-27 21:48:09.289584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:46.180 [2024-11-27 21:48:09.289627] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:46.180 [2024-11-27 21:48:09.289658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.180 [2024-11-27 21:48:09.291486] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.180 [2024-11-27 21:48:09.291561] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:46.180 pt1 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:15:46.180 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.180 21:48:09 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.441 malloc2 00:15:46.441 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.441 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:46.441 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.441 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.441 [2024-11-27 21:48:09.322243] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:46.442 [2024-11-27 21:48:09.322295] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:46.442 [2024-11-27 21:48:09.322310] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:46.442 [2024-11-27 21:48:09.322319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:46.442 [2024-11-27 21:48:09.324065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:46.442 [2024-11-27 21:48:09.324180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:46.442 pt2 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.442 [2024-11-27 21:48:09.334252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:46.442 [2024-11-27 21:48:09.336029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:46.442 [2024-11-27 21:48:09.336184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:46.442 [2024-11-27 21:48:09.336201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:46.442 [2024-11-27 21:48:09.336284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002390 00:15:46.442 [2024-11-27 21:48:09.336351] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:46.442 [2024-11-27 21:48:09.336362] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:46.442 [2024-11-27 21:48:09.336429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:46.442 21:48:09 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.442 "name": "raid_bdev1", 00:15:46.442 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:46.442 "strip_size_kb": 0, 00:15:46.442 "state": "online", 00:15:46.442 "raid_level": "raid1", 00:15:46.442 "superblock": true, 00:15:46.442 "num_base_bdevs": 2, 00:15:46.442 "num_base_bdevs_discovered": 2, 00:15:46.442 "num_base_bdevs_operational": 2, 00:15:46.442 "base_bdevs_list": [ 00:15:46.442 { 00:15:46.442 "name": "pt1", 00:15:46.442 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.442 "is_configured": true, 00:15:46.442 "data_offset": 256, 00:15:46.442 "data_size": 7936 00:15:46.442 }, 00:15:46.442 { 00:15:46.442 "name": "pt2", 00:15:46.442 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:15:46.442 "is_configured": true, 00:15:46.442 "data_offset": 256, 00:15:46.442 "data_size": 7936 00:15:46.442 } 00:15:46.442 ] 00:15:46.442 }' 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.442 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.701 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.701 [2024-11-27 21:48:09.809702] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 
00:15:46.961 "name": "raid_bdev1", 00:15:46.961 "aliases": [ 00:15:46.961 "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e" 00:15:46.961 ], 00:15:46.961 "product_name": "Raid Volume", 00:15:46.961 "block_size": 4128, 00:15:46.961 "num_blocks": 7936, 00:15:46.961 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:46.961 "md_size": 32, 00:15:46.961 "md_interleave": true, 00:15:46.961 "dif_type": 0, 00:15:46.961 "assigned_rate_limits": { 00:15:46.961 "rw_ios_per_sec": 0, 00:15:46.961 "rw_mbytes_per_sec": 0, 00:15:46.961 "r_mbytes_per_sec": 0, 00:15:46.961 "w_mbytes_per_sec": 0 00:15:46.961 }, 00:15:46.961 "claimed": false, 00:15:46.961 "zoned": false, 00:15:46.961 "supported_io_types": { 00:15:46.961 "read": true, 00:15:46.961 "write": true, 00:15:46.961 "unmap": false, 00:15:46.961 "flush": false, 00:15:46.961 "reset": true, 00:15:46.961 "nvme_admin": false, 00:15:46.961 "nvme_io": false, 00:15:46.961 "nvme_io_md": false, 00:15:46.961 "write_zeroes": true, 00:15:46.961 "zcopy": false, 00:15:46.961 "get_zone_info": false, 00:15:46.961 "zone_management": false, 00:15:46.961 "zone_append": false, 00:15:46.961 "compare": false, 00:15:46.961 "compare_and_write": false, 00:15:46.961 "abort": false, 00:15:46.961 "seek_hole": false, 00:15:46.961 "seek_data": false, 00:15:46.961 "copy": false, 00:15:46.961 "nvme_iov_md": false 00:15:46.961 }, 00:15:46.961 "memory_domains": [ 00:15:46.961 { 00:15:46.961 "dma_device_id": "system", 00:15:46.961 "dma_device_type": 1 00:15:46.961 }, 00:15:46.961 { 00:15:46.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.961 "dma_device_type": 2 00:15:46.961 }, 00:15:46.961 { 00:15:46.961 "dma_device_id": "system", 00:15:46.961 "dma_device_type": 1 00:15:46.961 }, 00:15:46.961 { 00:15:46.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.961 "dma_device_type": 2 00:15:46.961 } 00:15:46.961 ], 00:15:46.961 "driver_specific": { 00:15:46.961 "raid": { 00:15:46.961 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:46.961 "strip_size_kb": 0, 
00:15:46.961 "state": "online", 00:15:46.961 "raid_level": "raid1", 00:15:46.961 "superblock": true, 00:15:46.961 "num_base_bdevs": 2, 00:15:46.961 "num_base_bdevs_discovered": 2, 00:15:46.961 "num_base_bdevs_operational": 2, 00:15:46.961 "base_bdevs_list": [ 00:15:46.961 { 00:15:46.961 "name": "pt1", 00:15:46.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:46.961 "is_configured": true, 00:15:46.961 "data_offset": 256, 00:15:46.961 "data_size": 7936 00:15:46.961 }, 00:15:46.961 { 00:15:46.961 "name": "pt2", 00:15:46.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:46.961 "is_configured": true, 00:15:46.961 "data_offset": 256, 00:15:46.961 "data_size": 7936 00:15:46.961 } 00:15:46.961 ] 00:15:46.961 } 00:15:46.961 } 00:15:46.961 }' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:46.961 pt2' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:46.961 21:48:09 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:15:46.961 [2024-11-27 21:48:10.041226] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a425db8c-d8e2-43c7-b0ec-7dac053b5d4e 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z a425db8c-d8e2-43c7-b0ec-7dac053b5d4e ']' 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.961 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 [2024-11-27 21:48:10.084962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.222 [2024-11-27 21:48:10.085029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.222 [2024-11-27 21:48:10.085109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.222 [2024-11-27 21:48:10.085190] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.222 [2024-11-27 21:48:10.085234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.222 21:48:10 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 [2024-11-27 21:48:10.208756] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:47.222 [2024-11-27 21:48:10.210490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:47.222 [2024-11-27 21:48:10.210545] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:47.222 [2024-11-27 21:48:10.210590] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:47.222 [2024-11-27 21:48:10.210608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:47.222 [2024-11-27 21:48:10.210615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state configuring 00:15:47.222 request: 00:15:47.222 { 00:15:47.222 "name": "raid_bdev1", 00:15:47.222 "raid_level": "raid1", 00:15:47.222 "base_bdevs": [ 00:15:47.222 "malloc1", 00:15:47.222 "malloc2" 00:15:47.222 ], 00:15:47.222 "superblock": false, 00:15:47.222 "method": "bdev_raid_create", 00:15:47.222 "req_id": 1 00:15:47.222 } 00:15:47.222 Got JSON-RPC error response 00:15:47.222 response: 00:15:47.222 { 00:15:47.222 "code": -17, 00:15:47.222 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:47.222 } 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.222 [2024-11-27 21:48:10.276608] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:47.222 [2024-11-27 21:48:10.276703] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.222 [2024-11-27 21:48:10.276735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:47.222 [2024-11-27 21:48:10.276758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.222 [2024-11-27 21:48:10.278544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.222 [2024-11-27 21:48:10.278609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:47.222 [2024-11-27 21:48:10.278665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:15:47.222 [2024-11-27 21:48:10.278706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:47.222 pt1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.222 "name": "raid_bdev1", 00:15:47.222 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:47.222 "strip_size_kb": 0, 00:15:47.222 "state": "configuring", 00:15:47.222 "raid_level": "raid1", 00:15:47.222 "superblock": true, 00:15:47.222 "num_base_bdevs": 2, 00:15:47.222 "num_base_bdevs_discovered": 1, 00:15:47.222 "num_base_bdevs_operational": 2, 00:15:47.222 "base_bdevs_list": [ 00:15:47.222 { 00:15:47.222 "name": "pt1", 00:15:47.222 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.222 "is_configured": true, 00:15:47.222 "data_offset": 256, 00:15:47.222 "data_size": 7936 00:15:47.222 }, 00:15:47.222 { 00:15:47.222 "name": null, 00:15:47.222 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.222 "is_configured": false, 00:15:47.222 "data_offset": 256, 00:15:47.222 "data_size": 7936 00:15:47.222 } 00:15:47.222 ] 00:15:47.222 }' 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.222 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:47.791 [2024-11-27 21:48:10.744037] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:47.791 [2024-11-27 21:48:10.744146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:47.791 [2024-11-27 21:48:10.744167] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:47.791 [2024-11-27 21:48:10.744175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:47.791 [2024-11-27 21:48:10.744330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:47.791 [2024-11-27 21:48:10.744345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:47.791 [2024-11-27 21:48:10.744384] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:47.791 [2024-11-27 21:48:10.744400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:47.791 [2024-11-27 21:48:10.744468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001900 00:15:47.791 [2024-11-27 21:48:10.744475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:47.791 [2024-11-27 21:48:10.744538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:47.791 [2024-11-27 21:48:10.744588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001900 00:15:47.791 [2024-11-27 21:48:10.744600] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001900 00:15:47.791 [2024-11-27 21:48:10.744645] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.791 pt2 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.791 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.791 "name": "raid_bdev1", 00:15:47.791 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:47.791 "strip_size_kb": 0, 00:15:47.791 "state": "online", 00:15:47.791 "raid_level": "raid1", 00:15:47.791 "superblock": true, 00:15:47.791 "num_base_bdevs": 2, 00:15:47.791 "num_base_bdevs_discovered": 2, 00:15:47.791 "num_base_bdevs_operational": 2, 00:15:47.791 "base_bdevs_list": [ 00:15:47.791 { 00:15:47.791 "name": "pt1", 00:15:47.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:47.791 "is_configured": true, 00:15:47.791 "data_offset": 256, 00:15:47.791 "data_size": 7936 00:15:47.791 }, 00:15:47.791 { 00:15:47.791 "name": "pt2", 00:15:47.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:47.792 "is_configured": true, 00:15:47.792 "data_offset": 256, 00:15:47.792 "data_size": 7936 00:15:47.792 } 00:15:47.792 ] 00:15:47.792 }' 00:15:47.792 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.792 21:48:10 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.360 21:48:11 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.360 [2024-11-27 21:48:11.227425] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.360 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.361 "name": "raid_bdev1", 00:15:48.361 "aliases": [ 00:15:48.361 "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e" 00:15:48.361 ], 00:15:48.361 "product_name": "Raid Volume", 00:15:48.361 "block_size": 4128, 00:15:48.361 "num_blocks": 7936, 00:15:48.361 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:48.361 "md_size": 32, 00:15:48.361 "md_interleave": true, 00:15:48.361 "dif_type": 0, 00:15:48.361 "assigned_rate_limits": { 00:15:48.361 "rw_ios_per_sec": 0, 00:15:48.361 "rw_mbytes_per_sec": 0, 00:15:48.361 "r_mbytes_per_sec": 0, 00:15:48.361 "w_mbytes_per_sec": 0 00:15:48.361 }, 00:15:48.361 "claimed": false, 00:15:48.361 "zoned": false, 00:15:48.361 "supported_io_types": { 00:15:48.361 "read": true, 00:15:48.361 "write": true, 00:15:48.361 "unmap": false, 00:15:48.361 "flush": false, 00:15:48.361 "reset": true, 00:15:48.361 "nvme_admin": false, 00:15:48.361 "nvme_io": false, 00:15:48.361 "nvme_io_md": false, 00:15:48.361 "write_zeroes": true, 00:15:48.361 "zcopy": false, 00:15:48.361 "get_zone_info": false, 00:15:48.361 "zone_management": 
false, 00:15:48.361 "zone_append": false, 00:15:48.361 "compare": false, 00:15:48.361 "compare_and_write": false, 00:15:48.361 "abort": false, 00:15:48.361 "seek_hole": false, 00:15:48.361 "seek_data": false, 00:15:48.361 "copy": false, 00:15:48.361 "nvme_iov_md": false 00:15:48.361 }, 00:15:48.361 "memory_domains": [ 00:15:48.361 { 00:15:48.361 "dma_device_id": "system", 00:15:48.361 "dma_device_type": 1 00:15:48.361 }, 00:15:48.361 { 00:15:48.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.361 "dma_device_type": 2 00:15:48.361 }, 00:15:48.361 { 00:15:48.361 "dma_device_id": "system", 00:15:48.361 "dma_device_type": 1 00:15:48.361 }, 00:15:48.361 { 00:15:48.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.361 "dma_device_type": 2 00:15:48.361 } 00:15:48.361 ], 00:15:48.361 "driver_specific": { 00:15:48.361 "raid": { 00:15:48.361 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:48.361 "strip_size_kb": 0, 00:15:48.361 "state": "online", 00:15:48.361 "raid_level": "raid1", 00:15:48.361 "superblock": true, 00:15:48.361 "num_base_bdevs": 2, 00:15:48.361 "num_base_bdevs_discovered": 2, 00:15:48.361 "num_base_bdevs_operational": 2, 00:15:48.361 "base_bdevs_list": [ 00:15:48.361 { 00:15:48.361 "name": "pt1", 00:15:48.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.361 "is_configured": true, 00:15:48.361 "data_offset": 256, 00:15:48.361 "data_size": 7936 00:15:48.361 }, 00:15:48.361 { 00:15:48.361 "name": "pt2", 00:15:48.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.361 "is_configured": true, 00:15:48.361 "data_offset": 256, 00:15:48.361 "data_size": 7936 00:15:48.361 } 00:15:48.361 ] 00:15:48.361 } 00:15:48.361 } 00:15:48.361 }' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:15:48.361 pt2' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.361 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.361 [2024-11-27 21:48:11.479041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' a425db8c-d8e2-43c7-b0ec-7dac053b5d4e '!=' a425db8c-d8e2-43c7-b0ec-7dac053b5d4e ']' 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.621 21:48:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.621 [2024-11-27 21:48:11.510771] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.621 21:48:11 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.621 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.621 "name": "raid_bdev1", 00:15:48.621 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:48.621 "strip_size_kb": 0, 00:15:48.621 "state": "online", 00:15:48.621 "raid_level": "raid1", 00:15:48.621 "superblock": true, 00:15:48.621 "num_base_bdevs": 2, 00:15:48.621 "num_base_bdevs_discovered": 1, 00:15:48.621 "num_base_bdevs_operational": 1, 00:15:48.621 "base_bdevs_list": [ 00:15:48.621 { 00:15:48.621 "name": null, 00:15:48.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.621 "is_configured": false, 00:15:48.621 "data_offset": 0, 00:15:48.621 "data_size": 7936 00:15:48.621 }, 00:15:48.621 { 00:15:48.621 "name": "pt2", 00:15:48.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.621 "is_configured": true, 00:15:48.621 "data_offset": 256, 00:15:48.621 "data_size": 7936 00:15:48.621 } 00:15:48.621 ] 00:15:48.621 }' 00:15:48.622 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.622 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.881 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.881 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.881 21:48:11 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:48.881 [2024-11-27 21:48:11.997873] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.881 [2024-11-27 21:48:11.997945] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:15:48.881 [2024-11-27 21:48:11.998017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.881 [2024-11-27 21:48:11.998072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.881 [2024-11-27 21:48:11.998104] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001900 name raid_bdev1, state offline 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:49.141 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.142 
21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.142 [2024-11-27 21:48:12.073728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:49.142 [2024-11-27 21:48:12.073774] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.142 [2024-11-27 21:48:12.073789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008780 00:15:49.142 [2024-11-27 21:48:12.073809] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.142 [2024-11-27 21:48:12.075568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.142 [2024-11-27 21:48:12.075604] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:49.142 [2024-11-27 21:48:12.075646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:49.142 [2024-11-27 21:48:12.075678] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.142 [2024-11-27 21:48:12.075732] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001c80 00:15:49.142 [2024-11-27 21:48:12.075739] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:49.142 [2024-11-27 21:48:12.075815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:49.142 [2024-11-27 21:48:12.075865] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001c80 00:15:49.142 [2024-11-27 21:48:12.075873] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001c80 00:15:49.142 [2024-11-27 21:48:12.075930] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.142 pt2 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.142 "name": "raid_bdev1", 00:15:49.142 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:49.142 "strip_size_kb": 0, 00:15:49.142 "state": "online", 00:15:49.142 "raid_level": "raid1", 00:15:49.142 "superblock": true, 00:15:49.142 "num_base_bdevs": 2, 00:15:49.142 "num_base_bdevs_discovered": 1, 00:15:49.142 "num_base_bdevs_operational": 1, 00:15:49.142 "base_bdevs_list": [ 00:15:49.142 { 00:15:49.142 "name": null, 00:15:49.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.142 "is_configured": false, 00:15:49.142 "data_offset": 256, 00:15:49.142 "data_size": 7936 00:15:49.142 }, 00:15:49.142 { 00:15:49.142 "name": "pt2", 00:15:49.142 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.142 "is_configured": true, 00:15:49.142 "data_offset": 256, 00:15:49.142 "data_size": 7936 00:15:49.142 } 00:15:49.142 ] 00:15:49.142 }' 00:15:49.142 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.142 21:48:12 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.402 [2024-11-27 21:48:12.489020] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.402 [2024-11-27 21:48:12.489088] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.402 [2024-11-27 21:48:12.489161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.402 [2024-11-27 21:48:12.489214] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.402 [2024-11-27 21:48:12.489276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001c80 name raid_bdev1, state offline 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:49.402 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.662 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:49.662 21:48:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:49.662 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:15:49.662 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.662 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.662 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.662 [2024-11-27 21:48:12.552924] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.662 [2024-11-27 21:48:12.552980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.662 [2024-11-27 21:48:12.552993] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:15:49.662 [2024-11-27 21:48:12.553006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.662 [2024-11-27 21:48:12.554769] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.662 [2024-11-27 21:48:12.554822] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.662 [2024-11-27 21:48:12.554861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.662 [2024-11-27 21:48:12.554889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.662 [2024-11-27 21:48:12.554952] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:49.662 [2024-11-27 21:48:12.554971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.662 [2024-11-27 21:48:12.554994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000002000 name raid_bdev1, state configuring 00:15:49.663 [2024-11-27 21:48:12.555027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:49.663 [2024-11-27 21:48:12.555076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000002380 00:15:49.663 [2024-11-27 21:48:12.555085] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:49.663 [2024-11-27 21:48:12.555159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:49.663 [2024-11-27 21:48:12.555207] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000002380 00:15:49.663 [2024-11-27 21:48:12.555213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000002380 00:15:49.663 [2024-11-27 21:48:12.555268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.663 pt1 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:49.663 21:48:12 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.663 "name": "raid_bdev1", 00:15:49.663 "uuid": "a425db8c-d8e2-43c7-b0ec-7dac053b5d4e", 00:15:49.663 "strip_size_kb": 0, 00:15:49.663 "state": "online", 00:15:49.663 "raid_level": "raid1", 00:15:49.663 "superblock": true, 00:15:49.663 "num_base_bdevs": 2, 00:15:49.663 "num_base_bdevs_discovered": 1, 00:15:49.663 "num_base_bdevs_operational": 1, 00:15:49.663 "base_bdevs_list": [ 00:15:49.663 { 00:15:49.663 "name": null, 00:15:49.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.663 "is_configured": false, 00:15:49.663 "data_offset": 256, 00:15:49.663 "data_size": 7936 00:15:49.663 }, 00:15:49.663 { 00:15:49.663 "name": "pt2", 00:15:49.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.663 "is_configured": true, 00:15:49.663 "data_offset": 256, 00:15:49.663 
"data_size": 7936 00:15:49.663 } 00:15:49.663 ] 00:15:49.663 }' 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.663 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.923 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:49.923 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:49.923 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.923 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.923 21:48:12 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:49.923 [2024-11-27 21:48:13.016374] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.923 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' a425db8c-d8e2-43c7-b0ec-7dac053b5d4e '!=' a425db8c-d8e2-43c7-b0ec-7dac053b5d4e ']' 00:15:50.183 21:48:13 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 98667 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98667 ']' 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98667 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98667 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98667' 00:15:50.183 killing process with pid 98667 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 98667 00:15:50.183 [2024-11-27 21:48:13.097226] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.183 [2024-11-27 21:48:13.097281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.183 [2024-11-27 21:48:13.097318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.183 [2024-11-27 21:48:13.097326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000002380 name raid_bdev1, state offline 00:15:50.183 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 98667 00:15:50.183 [2024-11-27 21:48:13.121300] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.444 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:15:50.444 00:15:50.444 real 0m4.963s 00:15:50.444 user 0m8.117s 00:15:50.444 sys 0m1.110s 00:15:50.444 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.444 21:48:13 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.444 ************************************ 00:15:50.444 END TEST raid_superblock_test_md_interleaved 00:15:50.444 ************************************ 00:15:50.444 21:48:13 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:15:50.444 21:48:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:50.444 21:48:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.444 21:48:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:50.444 ************************************ 00:15:50.444 START TEST raid_rebuild_test_sb_md_interleaved 00:15:50.444 ************************************ 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:15:50.444 21:48:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:50.444 
21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=98979 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 98979 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 98979 ']' 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.444 21:48:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:50.444 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:50.444 Zero copy mechanism will not be used. 00:15:50.444 [2024-11-27 21:48:13.530770] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:15:50.444 [2024-11-27 21:48:13.530955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98979 ] 00:15:50.705 [2024-11-27 21:48:13.690247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.705 [2024-11-27 21:48:13.716349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.705 [2024-11-27 21:48:13.760195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.705 [2024-11-27 21:48:13.760231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.274 BaseBdev1_malloc 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.274 21:48:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.274 [2024-11-27 21:48:14.360164] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:51.274 [2024-11-27 21:48:14.360288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.274 [2024-11-27 21:48:14.360319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000006680 00:15:51.274 [2024-11-27 21:48:14.360329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.274 [2024-11-27 21:48:14.362177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.274 [2024-11-27 21:48:14.362211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:51.274 BaseBdev1 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.274 BaseBdev2_malloc 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.274 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.274 [2024-11-27 21:48:14.388983] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:51.274 [2024-11-27 21:48:14.389034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.274 [2024-11-27 21:48:14.389053] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:51.274 [2024-11-27 21:48:14.389064] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.274 [2024-11-27 21:48:14.390956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.274 [2024-11-27 21:48:14.391051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:51.534 BaseBdev2 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.534 spare_malloc 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.534 spare_delay 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.534 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.535 [2024-11-27 21:48:14.447376] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.535 [2024-11-27 21:48:14.447448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.535 [2024-11-27 21:48:14.447481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:15:51.535 [2024-11-27 21:48:14.447496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.535 [2024-11-27 21:48:14.450560] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.535 [2024-11-27 21:48:14.450612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.535 spare 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.535 [2024-11-27 21:48:14.459450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.535 [2024-11-27 21:48:14.461556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.535 [2024-11-27 
21:48:14.461810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001200 00:15:51.535 [2024-11-27 21:48:14.461829] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:51.535 [2024-11-27 21:48:14.461941] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002460 00:15:51.535 [2024-11-27 21:48:14.462018] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001200 00:15:51.535 [2024-11-27 21:48:14.462034] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001200 00:15:51.535 [2024-11-27 21:48:14.462111] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.535 "name": "raid_bdev1", 00:15:51.535 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:51.535 "strip_size_kb": 0, 00:15:51.535 "state": "online", 00:15:51.535 "raid_level": "raid1", 00:15:51.535 "superblock": true, 00:15:51.535 "num_base_bdevs": 2, 00:15:51.535 "num_base_bdevs_discovered": 2, 00:15:51.535 "num_base_bdevs_operational": 2, 00:15:51.535 "base_bdevs_list": [ 00:15:51.535 { 00:15:51.535 "name": "BaseBdev1", 00:15:51.535 "uuid": "7437b758-cae8-5e81-82e4-5201fa7d0039", 00:15:51.535 "is_configured": true, 00:15:51.535 "data_offset": 256, 00:15:51.535 "data_size": 7936 00:15:51.535 }, 00:15:51.535 { 00:15:51.535 "name": "BaseBdev2", 00:15:51.535 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:51.535 "is_configured": true, 00:15:51.535 "data_offset": 256, 00:15:51.535 "data_size": 7936 00:15:51.535 } 00:15:51.535 ] 00:15:51.535 }' 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.535 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.103 21:48:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.103 [2024-11-27 21:48:14.946842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:52.103 21:48:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:52.103 21:48:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.103 [2024-11-27 21:48:15.034433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.103 21:48:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.103 "name": "raid_bdev1", 00:15:52.103 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:52.103 "strip_size_kb": 0, 00:15:52.103 "state": "online", 00:15:52.103 "raid_level": "raid1", 00:15:52.103 "superblock": true, 00:15:52.103 "num_base_bdevs": 2, 00:15:52.103 "num_base_bdevs_discovered": 1, 00:15:52.103 "num_base_bdevs_operational": 1, 00:15:52.103 "base_bdevs_list": [ 00:15:52.103 { 00:15:52.103 "name": null, 00:15:52.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.103 "is_configured": false, 00:15:52.103 "data_offset": 0, 00:15:52.103 "data_size": 7936 00:15:52.103 }, 00:15:52.103 { 00:15:52.103 "name": "BaseBdev2", 00:15:52.103 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:52.103 "is_configured": true, 00:15:52.103 "data_offset": 256, 00:15:52.103 "data_size": 7936 00:15:52.103 } 00:15:52.103 ] 00:15:52.103 }' 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.103 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.363 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:52.363 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.363 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:52.363 [2024-11-27 21:48:15.465794] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:52.363 [2024-11-27 21:48:15.469456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002530 00:15:52.363 [2024-11-27 21:48:15.471279] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.363 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.363 21:48:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.742 "name": "raid_bdev1", 00:15:53.742 
"uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:53.742 "strip_size_kb": 0, 00:15:53.742 "state": "online", 00:15:53.742 "raid_level": "raid1", 00:15:53.742 "superblock": true, 00:15:53.742 "num_base_bdevs": 2, 00:15:53.742 "num_base_bdevs_discovered": 2, 00:15:53.742 "num_base_bdevs_operational": 2, 00:15:53.742 "process": { 00:15:53.742 "type": "rebuild", 00:15:53.742 "target": "spare", 00:15:53.742 "progress": { 00:15:53.742 "blocks": 2560, 00:15:53.742 "percent": 32 00:15:53.742 } 00:15:53.742 }, 00:15:53.742 "base_bdevs_list": [ 00:15:53.742 { 00:15:53.742 "name": "spare", 00:15:53.742 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:53.742 "is_configured": true, 00:15:53.742 "data_offset": 256, 00:15:53.742 "data_size": 7936 00:15:53.742 }, 00:15:53.742 { 00:15:53.742 "name": "BaseBdev2", 00:15:53.742 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:53.742 "is_configured": true, 00:15:53.742 "data_offset": 256, 00:15:53.742 "data_size": 7936 00:15:53.742 } 00:15:53.742 ] 00:15:53.742 }' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.742 [2024-11-27 21:48:16.634604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:15:53.742 [2024-11-27 21:48:16.675995] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:53.742 [2024-11-27 21:48:16.676109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.742 [2024-11-27 21:48:16.676129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:53.742 [2024-11-27 21:48:16.676137] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.742 "name": "raid_bdev1", 00:15:53.742 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:53.742 "strip_size_kb": 0, 00:15:53.742 "state": "online", 00:15:53.742 "raid_level": "raid1", 00:15:53.742 "superblock": true, 00:15:53.742 "num_base_bdevs": 2, 00:15:53.742 "num_base_bdevs_discovered": 1, 00:15:53.742 "num_base_bdevs_operational": 1, 00:15:53.742 "base_bdevs_list": [ 00:15:53.742 { 00:15:53.742 "name": null, 00:15:53.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.742 "is_configured": false, 00:15:53.742 "data_offset": 0, 00:15:53.742 "data_size": 7936 00:15:53.742 }, 00:15:53.742 { 00:15:53.742 "name": "BaseBdev2", 00:15:53.742 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:53.742 "is_configured": true, 00:15:53.742 "data_offset": 256, 00:15:53.742 "data_size": 7936 00:15:53.742 } 00:15:53.742 ] 00:15:53.742 }' 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.742 21:48:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.325 "name": "raid_bdev1", 00:15:54.325 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:54.325 "strip_size_kb": 0, 00:15:54.325 "state": "online", 00:15:54.325 "raid_level": "raid1", 00:15:54.325 "superblock": true, 00:15:54.325 "num_base_bdevs": 2, 00:15:54.325 "num_base_bdevs_discovered": 1, 00:15:54.325 "num_base_bdevs_operational": 1, 00:15:54.325 "base_bdevs_list": [ 00:15:54.325 { 00:15:54.325 "name": null, 00:15:54.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.325 "is_configured": false, 00:15:54.325 "data_offset": 0, 00:15:54.325 "data_size": 7936 00:15:54.325 }, 00:15:54.325 { 00:15:54.325 "name": "BaseBdev2", 00:15:54.325 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:54.325 "is_configured": true, 00:15:54.325 "data_offset": 256, 00:15:54.325 "data_size": 7936 00:15:54.325 } 00:15:54.325 ] 00:15:54.325 }' 
00:15:54.325 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:54.326 [2024-11-27 21:48:17.295000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.326 [2024-11-27 21:48:17.298534] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002600 00:15:54.326 [2024-11-27 21:48:17.300387] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.326 21:48:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:55.309 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.310 "name": "raid_bdev1", 00:15:55.310 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:55.310 "strip_size_kb": 0, 00:15:55.310 "state": "online", 00:15:55.310 "raid_level": "raid1", 00:15:55.310 "superblock": true, 00:15:55.310 "num_base_bdevs": 2, 00:15:55.310 "num_base_bdevs_discovered": 2, 00:15:55.310 "num_base_bdevs_operational": 2, 00:15:55.310 "process": { 00:15:55.310 "type": "rebuild", 00:15:55.310 "target": "spare", 00:15:55.310 "progress": { 00:15:55.310 "blocks": 2560, 00:15:55.310 "percent": 32 00:15:55.310 } 00:15:55.310 }, 00:15:55.310 "base_bdevs_list": [ 00:15:55.310 { 00:15:55.310 "name": "spare", 00:15:55.310 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:55.310 "is_configured": true, 00:15:55.310 "data_offset": 256, 00:15:55.310 "data_size": 7936 00:15:55.310 }, 00:15:55.310 { 00:15:55.310 "name": "BaseBdev2", 00:15:55.310 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:55.310 "is_configured": true, 00:15:55.310 "data_offset": 256, 00:15:55.310 "data_size": 7936 00:15:55.310 } 00:15:55.310 ] 00:15:55.310 }' 00:15:55.310 21:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.310 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:55.570 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=607 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.570 21:48:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.570 "name": "raid_bdev1", 00:15:55.570 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:55.570 "strip_size_kb": 0, 00:15:55.570 "state": "online", 00:15:55.570 "raid_level": "raid1", 00:15:55.570 "superblock": true, 00:15:55.570 "num_base_bdevs": 2, 00:15:55.570 "num_base_bdevs_discovered": 2, 00:15:55.570 "num_base_bdevs_operational": 2, 00:15:55.570 "process": { 00:15:55.570 "type": "rebuild", 00:15:55.570 "target": "spare", 00:15:55.570 "progress": { 00:15:55.570 "blocks": 2816, 00:15:55.570 "percent": 35 00:15:55.570 } 00:15:55.570 }, 00:15:55.570 "base_bdevs_list": [ 00:15:55.570 { 00:15:55.570 "name": "spare", 00:15:55.570 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:55.570 "is_configured": true, 00:15:55.570 "data_offset": 256, 00:15:55.570 "data_size": 7936 00:15:55.570 }, 00:15:55.570 { 00:15:55.570 "name": "BaseBdev2", 00:15:55.570 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:55.570 "is_configured": true, 00:15:55.570 "data_offset": 256, 00:15:55.570 "data_size": 7936 00:15:55.570 } 00:15:55.570 ] 00:15:55.570 }' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.570 21:48:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:56.508 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.768 21:48:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.768 "name": "raid_bdev1", 00:15:56.768 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:56.768 "strip_size_kb": 0, 00:15:56.768 "state": "online", 00:15:56.768 "raid_level": "raid1", 00:15:56.768 "superblock": true, 00:15:56.768 "num_base_bdevs": 2, 00:15:56.768 "num_base_bdevs_discovered": 2, 00:15:56.768 "num_base_bdevs_operational": 2, 00:15:56.768 "process": { 00:15:56.768 "type": "rebuild", 00:15:56.768 "target": "spare", 00:15:56.768 "progress": { 00:15:56.768 "blocks": 5888, 00:15:56.768 "percent": 74 00:15:56.768 } 00:15:56.768 }, 00:15:56.768 "base_bdevs_list": [ 00:15:56.768 { 00:15:56.768 "name": "spare", 00:15:56.768 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:56.768 "is_configured": true, 00:15:56.768 "data_offset": 256, 00:15:56.768 "data_size": 7936 00:15:56.768 }, 00:15:56.768 { 00:15:56.768 "name": "BaseBdev2", 00:15:56.768 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:56.768 "is_configured": true, 00:15:56.768 "data_offset": 256, 00:15:56.768 "data_size": 7936 00:15:56.768 } 00:15:56.768 ] 00:15:56.768 }' 00:15:56.768 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.768 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.768 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.768 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.768 21:48:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.338 [2024-11-27 21:48:20.411070] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:57.338 [2024-11-27 21:48:20.411195] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:57.338 [2024-11-27 21:48:20.411351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.933 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.933 "name": "raid_bdev1", 00:15:57.933 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:57.933 "strip_size_kb": 0, 00:15:57.933 "state": "online", 00:15:57.933 "raid_level": "raid1", 00:15:57.933 "superblock": true, 00:15:57.933 "num_base_bdevs": 2, 00:15:57.933 
"num_base_bdevs_discovered": 2, 00:15:57.933 "num_base_bdevs_operational": 2, 00:15:57.933 "base_bdevs_list": [ 00:15:57.933 { 00:15:57.933 "name": "spare", 00:15:57.933 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:57.933 "is_configured": true, 00:15:57.933 "data_offset": 256, 00:15:57.933 "data_size": 7936 00:15:57.933 }, 00:15:57.933 { 00:15:57.933 "name": "BaseBdev2", 00:15:57.933 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:57.933 "is_configured": true, 00:15:57.933 "data_offset": 256, 00:15:57.933 "data_size": 7936 00:15:57.934 } 00:15:57.934 ] 00:15:57.934 }' 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.934 
21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.934 "name": "raid_bdev1", 00:15:57.934 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:57.934 "strip_size_kb": 0, 00:15:57.934 "state": "online", 00:15:57.934 "raid_level": "raid1", 00:15:57.934 "superblock": true, 00:15:57.934 "num_base_bdevs": 2, 00:15:57.934 "num_base_bdevs_discovered": 2, 00:15:57.934 "num_base_bdevs_operational": 2, 00:15:57.934 "base_bdevs_list": [ 00:15:57.934 { 00:15:57.934 "name": "spare", 00:15:57.934 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:57.934 "is_configured": true, 00:15:57.934 "data_offset": 256, 00:15:57.934 "data_size": 7936 00:15:57.934 }, 00:15:57.934 { 00:15:57.934 "name": "BaseBdev2", 00:15:57.934 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:57.934 "is_configured": true, 00:15:57.934 "data_offset": 256, 00:15:57.934 "data_size": 7936 00:15:57.934 } 00:15:57.934 ] 00:15:57.934 }' 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.934 21:48:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:57.934 21:48:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.934 "name": 
"raid_bdev1", 00:15:57.934 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:57.934 "strip_size_kb": 0, 00:15:57.934 "state": "online", 00:15:57.934 "raid_level": "raid1", 00:15:57.934 "superblock": true, 00:15:57.934 "num_base_bdevs": 2, 00:15:57.934 "num_base_bdevs_discovered": 2, 00:15:57.934 "num_base_bdevs_operational": 2, 00:15:57.934 "base_bdevs_list": [ 00:15:57.934 { 00:15:57.934 "name": "spare", 00:15:57.934 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:57.934 "is_configured": true, 00:15:57.934 "data_offset": 256, 00:15:57.934 "data_size": 7936 00:15:57.934 }, 00:15:57.934 { 00:15:57.934 "name": "BaseBdev2", 00:15:57.934 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:57.934 "is_configured": true, 00:15:57.934 "data_offset": 256, 00:15:57.934 "data_size": 7936 00:15:57.934 } 00:15:57.934 ] 00:15:57.934 }' 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.934 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 [2024-11-27 21:48:21.404960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:58.504 [2024-11-27 21:48:21.405036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.504 [2024-11-27 21:48:21.405144] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.504 [2024-11-27 21:48:21.405242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.504 [2024-11-27 
21:48:21.405295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001200 name raid_bdev1, state offline 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.504 21:48:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.504 [2024-11-27 21:48:21.476871] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:58.504 [2024-11-27 21:48:21.476921] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:58.504 [2024-11-27 21:48:21.476940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:15:58.504 [2024-11-27 21:48:21.476949] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:58.504 [2024-11-27 21:48:21.478821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:58.504 [2024-11-27 21:48:21.478858] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:58.504 [2024-11-27 21:48:21.478903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:58.504 [2024-11-27 21:48:21.478941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.504 [2024-11-27 21:48:21.479016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:58.504 spare 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.504 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.505 [2024-11-27 21:48:21.578892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000001580 00:15:58.505 [2024-11-27 21:48:21.578920] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:15:58.505 [2024-11-27 21:48:21.579025] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000026d0 00:15:58.505 [2024-11-27 21:48:21.579103] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000001580 00:15:58.505 [2024-11-27 21:48:21.579113] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000001580 00:15:58.505 [2024-11-27 21:48:21.579181] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.505 21:48:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:58.505 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.764 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.764 "name": "raid_bdev1", 00:15:58.764 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:58.764 "strip_size_kb": 0, 00:15:58.764 "state": "online", 00:15:58.764 "raid_level": "raid1", 00:15:58.764 "superblock": true, 00:15:58.764 "num_base_bdevs": 2, 00:15:58.764 "num_base_bdevs_discovered": 2, 00:15:58.764 "num_base_bdevs_operational": 2, 00:15:58.764 "base_bdevs_list": [ 00:15:58.764 { 00:15:58.764 "name": "spare", 00:15:58.764 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:58.764 "is_configured": true, 00:15:58.764 "data_offset": 256, 00:15:58.764 "data_size": 7936 00:15:58.764 }, 00:15:58.764 { 00:15:58.764 "name": "BaseBdev2", 00:15:58.764 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:58.764 "is_configured": true, 00:15:58.764 "data_offset": 256, 00:15:58.764 "data_size": 7936 00:15:58.764 } 00:15:58.764 ] 00:15:58.764 }' 00:15:58.764 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.764 21:48:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.024 21:48:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.024 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.024 "name": "raid_bdev1", 00:15:59.024 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:59.024 "strip_size_kb": 0, 00:15:59.024 "state": "online", 00:15:59.024 "raid_level": "raid1", 00:15:59.024 "superblock": true, 00:15:59.024 "num_base_bdevs": 2, 00:15:59.024 "num_base_bdevs_discovered": 2, 00:15:59.024 "num_base_bdevs_operational": 2, 00:15:59.024 "base_bdevs_list": [ 00:15:59.024 { 00:15:59.024 "name": "spare", 00:15:59.024 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:15:59.024 "is_configured": true, 00:15:59.024 "data_offset": 256, 00:15:59.024 "data_size": 7936 00:15:59.024 }, 00:15:59.024 { 00:15:59.024 "name": "BaseBdev2", 00:15:59.024 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:59.024 "is_configured": true, 00:15:59.024 "data_offset": 256, 00:15:59.024 "data_size": 7936 00:15:59.024 } 00:15:59.024 ] 00:15:59.024 }' 00:15:59.024 21:48:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.283 [2024-11-27 21:48:22.267660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:59.283 21:48:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.283 "name": "raid_bdev1", 00:15:59.283 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:15:59.283 "strip_size_kb": 0, 00:15:59.283 "state": "online", 00:15:59.283 
"raid_level": "raid1", 00:15:59.283 "superblock": true, 00:15:59.283 "num_base_bdevs": 2, 00:15:59.283 "num_base_bdevs_discovered": 1, 00:15:59.283 "num_base_bdevs_operational": 1, 00:15:59.283 "base_bdevs_list": [ 00:15:59.283 { 00:15:59.283 "name": null, 00:15:59.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.283 "is_configured": false, 00:15:59.283 "data_offset": 0, 00:15:59.283 "data_size": 7936 00:15:59.283 }, 00:15:59.283 { 00:15:59.283 "name": "BaseBdev2", 00:15:59.283 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:15:59.283 "is_configured": true, 00:15:59.283 "data_offset": 256, 00:15:59.283 "data_size": 7936 00:15:59.283 } 00:15:59.283 ] 00:15:59.283 }' 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.283 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.853 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.853 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.853 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:15:59.853 [2024-11-27 21:48:22.678955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.853 [2024-11-27 21:48:22.679134] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:59.853 [2024-11-27 21:48:22.679203] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:59.853 [2024-11-27 21:48:22.679282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.853 [2024-11-27 21:48:22.682831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000027a0 00:15:59.853 [2024-11-27 21:48:22.684635] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.853 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.853 21:48:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:00.792 "name": "raid_bdev1", 00:16:00.792 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:00.792 "strip_size_kb": 0, 00:16:00.792 "state": "online", 00:16:00.792 "raid_level": "raid1", 00:16:00.792 "superblock": true, 00:16:00.792 "num_base_bdevs": 2, 00:16:00.792 "num_base_bdevs_discovered": 2, 00:16:00.792 "num_base_bdevs_operational": 2, 00:16:00.792 "process": { 00:16:00.792 "type": "rebuild", 00:16:00.792 "target": "spare", 00:16:00.792 "progress": { 00:16:00.792 "blocks": 2560, 00:16:00.792 "percent": 32 00:16:00.792 } 00:16:00.792 }, 00:16:00.792 "base_bdevs_list": [ 00:16:00.792 { 00:16:00.792 "name": "spare", 00:16:00.792 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:16:00.792 "is_configured": true, 00:16:00.792 "data_offset": 256, 00:16:00.792 "data_size": 7936 00:16:00.792 }, 00:16:00.792 { 00:16:00.792 "name": "BaseBdev2", 00:16:00.792 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:00.792 "is_configured": true, 00:16:00.792 "data_offset": 256, 00:16:00.792 "data_size": 7936 00:16:00.792 } 00:16:00.792 ] 00:16:00.792 }' 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:00.792 [2024-11-27 21:48:23.835380] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.792 [2024-11-27 21:48:23.888607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:00.792 [2024-11-27 21:48:23.888657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.792 [2024-11-27 21:48:23.888673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.792 [2024-11-27 21:48:23.888680] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.792 21:48:23 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.792 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.052 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.052 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.052 "name": "raid_bdev1", 00:16:01.052 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:01.052 "strip_size_kb": 0, 00:16:01.052 "state": "online", 00:16:01.052 "raid_level": "raid1", 00:16:01.052 "superblock": true, 00:16:01.052 "num_base_bdevs": 2, 00:16:01.052 "num_base_bdevs_discovered": 1, 00:16:01.052 "num_base_bdevs_operational": 1, 00:16:01.052 "base_bdevs_list": [ 00:16:01.052 { 00:16:01.052 "name": null, 00:16:01.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.052 "is_configured": false, 00:16:01.052 "data_offset": 0, 00:16:01.052 "data_size": 7936 00:16:01.052 }, 00:16:01.052 { 00:16:01.052 "name": "BaseBdev2", 00:16:01.052 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:01.052 "is_configured": true, 00:16:01.052 "data_offset": 256, 00:16:01.052 "data_size": 7936 00:16:01.052 } 00:16:01.052 ] 00:16:01.052 }' 00:16:01.052 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.052 21:48:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 21:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:01.312 21:48:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.312 21:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:01.312 [2024-11-27 21:48:24.356005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:01.312 [2024-11-27 21:48:24.356110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.312 [2024-11-27 21:48:24.356154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:01.312 [2024-11-27 21:48:24.356194] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.312 [2024-11-27 21:48:24.356415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.312 [2024-11-27 21:48:24.356462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:01.312 [2024-11-27 21:48:24.356554] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:01.312 [2024-11-27 21:48:24.356589] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:01.312 [2024-11-27 21:48:24.356648] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:01.312 [2024-11-27 21:48:24.356726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:01.312 [2024-11-27 21:48:24.359847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000002870 00:16:01.312 [2024-11-27 21:48:24.361718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.312 spare 00:16:01.312 21:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.312 21:48:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:02.249 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.249 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.249 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.249 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:16:02.509 "name": "raid_bdev1", 00:16:02.509 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:02.509 "strip_size_kb": 0, 00:16:02.509 "state": "online", 00:16:02.509 "raid_level": "raid1", 00:16:02.509 "superblock": true, 00:16:02.509 "num_base_bdevs": 2, 00:16:02.509 "num_base_bdevs_discovered": 2, 00:16:02.509 "num_base_bdevs_operational": 2, 00:16:02.509 "process": { 00:16:02.509 "type": "rebuild", 00:16:02.509 "target": "spare", 00:16:02.509 "progress": { 00:16:02.509 "blocks": 2560, 00:16:02.509 "percent": 32 00:16:02.509 } 00:16:02.509 }, 00:16:02.509 "base_bdevs_list": [ 00:16:02.509 { 00:16:02.509 "name": "spare", 00:16:02.509 "uuid": "08a44727-63ea-52a3-ab94-f1e66fce1db7", 00:16:02.509 "is_configured": true, 00:16:02.509 "data_offset": 256, 00:16:02.509 "data_size": 7936 00:16:02.509 }, 00:16:02.509 { 00:16:02.509 "name": "BaseBdev2", 00:16:02.509 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:02.509 "is_configured": true, 00:16:02.509 "data_offset": 256, 00:16:02.509 "data_size": 7936 00:16:02.509 } 00:16:02.509 ] 00:16:02.509 }' 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.509 [2024-11-27 
21:48:25.528961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.509 [2024-11-27 21:48:25.565787] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:02.509 [2024-11-27 21:48:25.565905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.509 [2024-11-27 21:48:25.565941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:02.509 [2024-11-27 21:48:25.565965] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.509 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.510 21:48:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.510 "name": "raid_bdev1", 00:16:02.510 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:02.510 "strip_size_kb": 0, 00:16:02.510 "state": "online", 00:16:02.510 "raid_level": "raid1", 00:16:02.510 "superblock": true, 00:16:02.510 "num_base_bdevs": 2, 00:16:02.510 "num_base_bdevs_discovered": 1, 00:16:02.510 "num_base_bdevs_operational": 1, 00:16:02.510 "base_bdevs_list": [ 00:16:02.510 { 00:16:02.510 "name": null, 00:16:02.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.510 "is_configured": false, 00:16:02.510 "data_offset": 0, 00:16:02.510 "data_size": 7936 00:16:02.510 }, 00:16:02.510 { 00:16:02.510 "name": "BaseBdev2", 00:16:02.510 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:02.510 "is_configured": true, 00:16:02.510 "data_offset": 256, 00:16:02.510 "data_size": 7936 00:16:02.510 } 00:16:02.510 ] 00:16:02.510 }' 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.510 21:48:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:03.078 21:48:26 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.078 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.078 "name": "raid_bdev1", 00:16:03.078 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:03.078 "strip_size_kb": 0, 00:16:03.078 "state": "online", 00:16:03.078 "raid_level": "raid1", 00:16:03.078 "superblock": true, 00:16:03.078 "num_base_bdevs": 2, 00:16:03.078 "num_base_bdevs_discovered": 1, 00:16:03.078 "num_base_bdevs_operational": 1, 00:16:03.078 "base_bdevs_list": [ 00:16:03.078 { 00:16:03.078 "name": null, 00:16:03.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.078 "is_configured": false, 00:16:03.078 "data_offset": 0, 00:16:03.078 "data_size": 7936 00:16:03.078 }, 00:16:03.078 { 00:16:03.078 "name": "BaseBdev2", 00:16:03.078 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:03.078 "is_configured": true, 00:16:03.078 "data_offset": 256, 
00:16:03.078 "data_size": 7936 00:16:03.078 } 00:16:03.078 ] 00:16:03.078 }' 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:03.079 [2024-11-27 21:48:26.180685] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.079 [2024-11-27 21:48:26.180778] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.079 [2024-11-27 21:48:26.180809] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.079 [2024-11-27 21:48:26.180821] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.079 [2024-11-27 21:48:26.180971] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.079 [2024-11-27 21:48:26.180999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.079 [2024-11-27 21:48:26.181043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:03.079 [2024-11-27 21:48:26.181057] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:03.079 [2024-11-27 21:48:26.181064] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:03.079 [2024-11-27 21:48:26.181076] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:03.079 BaseBdev1 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.079 21:48:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.457 21:48:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.457 "name": "raid_bdev1", 00:16:04.457 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:04.457 "strip_size_kb": 0, 00:16:04.457 "state": "online", 00:16:04.457 "raid_level": "raid1", 00:16:04.457 "superblock": true, 00:16:04.457 "num_base_bdevs": 2, 00:16:04.457 "num_base_bdevs_discovered": 1, 00:16:04.457 "num_base_bdevs_operational": 1, 00:16:04.457 "base_bdevs_list": [ 00:16:04.457 { 00:16:04.457 "name": null, 00:16:04.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.457 "is_configured": false, 00:16:04.457 "data_offset": 0, 00:16:04.457 "data_size": 7936 00:16:04.457 }, 00:16:04.457 { 00:16:04.457 "name": "BaseBdev2", 00:16:04.457 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:04.457 "is_configured": true, 00:16:04.457 "data_offset": 256, 00:16:04.457 "data_size": 7936 00:16:04.457 } 00:16:04.457 ] 00:16:04.457 }' 00:16:04.457 21:48:27 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.457 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.716 "name": "raid_bdev1", 00:16:04.716 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:04.716 "strip_size_kb": 0, 00:16:04.716 "state": "online", 00:16:04.716 "raid_level": "raid1", 00:16:04.716 "superblock": true, 00:16:04.716 "num_base_bdevs": 2, 00:16:04.716 "num_base_bdevs_discovered": 1, 00:16:04.716 "num_base_bdevs_operational": 1, 00:16:04.716 "base_bdevs_list": [ 00:16:04.716 { 00:16:04.716 "name": 
null, 00:16:04.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.716 "is_configured": false, 00:16:04.716 "data_offset": 0, 00:16:04.716 "data_size": 7936 00:16:04.716 }, 00:16:04.716 { 00:16:04.716 "name": "BaseBdev2", 00:16:04.716 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:04.716 "is_configured": true, 00:16:04.716 "data_offset": 256, 00:16:04.716 "data_size": 7936 00:16:04.716 } 00:16:04.716 ] 00:16:04.716 }' 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.716 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:04.717 [2024-11-27 21:48:27.706267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.717 [2024-11-27 21:48:27.706404] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:04.717 [2024-11-27 21:48:27.706416] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:04.717 request: 00:16:04.717 { 00:16:04.717 "base_bdev": "BaseBdev1", 00:16:04.717 "raid_bdev": "raid_bdev1", 00:16:04.717 "method": "bdev_raid_add_base_bdev", 00:16:04.717 "req_id": 1 00:16:04.717 } 00:16:04.717 Got JSON-RPC error response 00:16:04.717 response: 00:16:04.717 { 00:16:04.717 "code": -22, 00:16:04.717 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:04.717 } 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.717 21:48:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.654 "name": "raid_bdev1", 00:16:05.654 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:05.654 "strip_size_kb": 0, 
00:16:05.654 "state": "online", 00:16:05.654 "raid_level": "raid1", 00:16:05.654 "superblock": true, 00:16:05.654 "num_base_bdevs": 2, 00:16:05.654 "num_base_bdevs_discovered": 1, 00:16:05.654 "num_base_bdevs_operational": 1, 00:16:05.654 "base_bdevs_list": [ 00:16:05.654 { 00:16:05.654 "name": null, 00:16:05.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.654 "is_configured": false, 00:16:05.654 "data_offset": 0, 00:16:05.654 "data_size": 7936 00:16:05.654 }, 00:16:05.654 { 00:16:05.654 "name": "BaseBdev2", 00:16:05.654 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:05.654 "is_configured": true, 00:16:05.654 "data_offset": 256, 00:16:05.654 "data_size": 7936 00:16:05.654 } 00:16:05.654 ] 00:16:05.654 }' 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.654 21:48:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.235 
21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.235 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.235 "name": "raid_bdev1", 00:16:06.235 "uuid": "f2ce1f99-ed5a-4228-bfcf-597c5b465ffc", 00:16:06.235 "strip_size_kb": 0, 00:16:06.235 "state": "online", 00:16:06.235 "raid_level": "raid1", 00:16:06.235 "superblock": true, 00:16:06.235 "num_base_bdevs": 2, 00:16:06.235 "num_base_bdevs_discovered": 1, 00:16:06.235 "num_base_bdevs_operational": 1, 00:16:06.235 "base_bdevs_list": [ 00:16:06.235 { 00:16:06.235 "name": null, 00:16:06.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.235 "is_configured": false, 00:16:06.235 "data_offset": 0, 00:16:06.235 "data_size": 7936 00:16:06.236 }, 00:16:06.236 { 00:16:06.236 "name": "BaseBdev2", 00:16:06.236 "uuid": "98c92d0f-abb1-5234-ae73-0e217e3d2cb2", 00:16:06.236 "is_configured": true, 00:16:06.236 "data_offset": 256, 00:16:06.236 "data_size": 7936 00:16:06.236 } 00:16:06.236 ] 00:16:06.236 }' 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 98979 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 98979 ']' 00:16:06.236 21:48:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 98979 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.236 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98979 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98979' 00:16:06.504 killing process with pid 98979 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 98979 00:16:06.504 Received shutdown signal, test time was about 60.000000 seconds 00:16:06.504 00:16:06.504 Latency(us) 00:16:06.504 [2024-11-27T21:48:29.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.504 [2024-11-27T21:48:29.625Z] =================================================================================================================== 00:16:06.504 [2024-11-27T21:48:29.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:06.504 [2024-11-27 21:48:29.361114] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:06.504 [2024-11-27 21:48:29.361218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 98979 00:16:06.504 [2024-11-27 21:48:29.361265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:16:06.504 [2024-11-27 21:48:29.361286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000001580 name raid_bdev1, state offline 00:16:06.504 [2024-11-27 21:48:29.394612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:06.504 ************************************ 00:16:06.504 END TEST raid_rebuild_test_sb_md_interleaved 00:16:06.504 ************************************ 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:16:06.504 00:16:06.504 real 0m16.164s 00:16:06.504 user 0m21.615s 00:16:06.504 sys 0m1.718s 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.504 21:48:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:16:06.765 21:48:29 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:16:06.765 21:48:29 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:16:06.765 21:48:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 98979 ']' 00:16:06.765 21:48:29 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 98979 00:16:06.765 21:48:29 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:16:06.765 ************************************ 00:16:06.765 END TEST bdev_raid 00:16:06.765 ************************************ 00:16:06.765 00:16:06.765 real 9m48.588s 00:16:06.765 user 14m0.299s 00:16:06.765 sys 1m44.277s 00:16:06.765 21:48:29 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.765 21:48:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:06.765 21:48:29 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:06.765 21:48:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.765 21:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.765 21:48:29 -- common/autotest_common.sh@10 -- # set +x 00:16:06.765 
************************************ 00:16:06.765 START TEST spdkcli_raid 00:16:06.765 ************************************ 00:16:06.765 21:48:29 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:06.765 * Looking for test storage... 00:16:06.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:06.765 21:48:29 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.026 21:48:29 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.026 --rc genhtml_branch_coverage=1 00:16:07.026 --rc genhtml_function_coverage=1 00:16:07.026 --rc genhtml_legend=1 00:16:07.026 --rc geninfo_all_blocks=1 00:16:07.026 --rc geninfo_unexecuted_blocks=1 00:16:07.026 00:16:07.026 ' 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.026 --rc genhtml_branch_coverage=1 00:16:07.026 --rc genhtml_function_coverage=1 00:16:07.026 --rc genhtml_legend=1 00:16:07.026 --rc geninfo_all_blocks=1 00:16:07.026 --rc geninfo_unexecuted_blocks=1 00:16:07.026 00:16:07.026 ' 00:16:07.026 
21:48:29 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.026 --rc genhtml_branch_coverage=1 00:16:07.026 --rc genhtml_function_coverage=1 00:16:07.026 --rc genhtml_legend=1 00:16:07.026 --rc geninfo_all_blocks=1 00:16:07.026 --rc geninfo_unexecuted_blocks=1 00:16:07.026 00:16:07.026 ' 00:16:07.026 21:48:29 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:07.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.026 --rc genhtml_branch_coverage=1 00:16:07.026 --rc genhtml_function_coverage=1 00:16:07.026 --rc genhtml_legend=1 00:16:07.026 --rc geninfo_all_blocks=1 00:16:07.026 --rc geninfo_unexecuted_blocks=1 00:16:07.026 00:16:07.026 ' 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:16:07.026 21:48:29 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:16:07.026 21:48:29 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:16:07.026 21:48:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.026 21:48:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=99649 00:16:07.026 21:48:30 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:16:07.027 21:48:30 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 99649 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 99649 ']' 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.027 21:48:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.027 [2024-11-27 21:48:30.121549] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:07.027 [2024-11-27 21:48:30.121791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99649 ] 00:16:07.287 [2024-11-27 21:48:30.278889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:07.287 [2024-11-27 21:48:30.306658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:07.287 [2024-11-27 21:48:30.306678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:16:07.855 21:48:30 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:07.855 21:48:30 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:07.855 21:48:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.114 21:48:30 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:16:08.114 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:16:08.114 ' 00:16:09.494 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:16:09.494 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:16:09.494 21:48:32 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:16:09.494 21:48:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:09.494 21:48:32 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.754 21:48:32 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:16:09.754 21:48:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:09.754 21:48:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:09.754 21:48:32 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:16:09.754 ' 00:16:10.691 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:16:10.691 21:48:33 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:16:10.691 21:48:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:10.691 21:48:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.950 21:48:33 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:16:10.950 21:48:33 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:10.950 21:48:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:10.950 21:48:33 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:16:10.951 21:48:33 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:16:11.519 21:48:34 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:16:11.520 21:48:34 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:16:11.520 21:48:34 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:16:11.520 21:48:34 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.520 21:48:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.520 21:48:34 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:16:11.520 21:48:34 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:11.520 21:48:34 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.520 21:48:34 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:16:11.520 ' 00:16:12.457 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:16:12.457 21:48:35 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:16:12.457 21:48:35 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:12.457 21:48:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.457 21:48:35 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:16:12.457 21:48:35 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.457 21:48:35 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.457 21:48:35 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:16:12.457 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:16:12.457 ' 00:16:13.836 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:16:13.836 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:16:14.096 21:48:37 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.096 21:48:37 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 99649 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 99649 ']' 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 99649 00:16:14.096 21:48:37 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99649 00:16:14.096 killing process with pid 99649 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99649' 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 99649 00:16:14.096 21:48:37 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 99649 00:16:14.356 21:48:37 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:16:14.356 21:48:37 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 99649 ']' 00:16:14.356 21:48:37 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 99649 00:16:14.356 21:48:37 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 99649 ']' 00:16:14.356 21:48:37 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 99649 00:16:14.615 Process with pid 99649 is not found 00:16:14.615 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (99649) - No such process 00:16:14.615 21:48:37 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 99649 is not found' 00:16:14.615 21:48:37 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:16:14.615 21:48:37 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:16:14.615 21:48:37 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:16:14.615 21:48:37 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:16:14.615 00:16:14.615 real 0m7.732s 00:16:14.615 user 0m16.316s 00:16:14.615 sys 
0m1.122s 00:16:14.615 21:48:37 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:14.615 21:48:37 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:16:14.615 ************************************ 00:16:14.615 END TEST spdkcli_raid 00:16:14.615 ************************************ 00:16:14.615 21:48:37 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:14.615 21:48:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:14.615 21:48:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:14.615 21:48:37 -- common/autotest_common.sh@10 -- # set +x 00:16:14.615 ************************************ 00:16:14.615 START TEST blockdev_raid5f 00:16:14.615 ************************************ 00:16:14.615 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:16:14.615 * Looking for test storage... 00:16:14.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:16:14.615 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:14.615 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:16:14.615 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:14.874 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:14.874 21:48:37 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.874 21:48:37 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.874 21:48:37 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.874 21:48:37 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.874 21:48:37 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.875 21:48:37 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:14.875 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.875 --rc genhtml_branch_coverage=1 00:16:14.875 --rc genhtml_function_coverage=1 00:16:14.875 --rc genhtml_legend=1 00:16:14.875 --rc geninfo_all_blocks=1 00:16:14.875 --rc geninfo_unexecuted_blocks=1 00:16:14.875 00:16:14.875 ' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:14.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.875 --rc genhtml_branch_coverage=1 00:16:14.875 --rc genhtml_function_coverage=1 00:16:14.875 --rc genhtml_legend=1 00:16:14.875 --rc geninfo_all_blocks=1 00:16:14.875 --rc geninfo_unexecuted_blocks=1 00:16:14.875 00:16:14.875 ' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:14.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.875 --rc genhtml_branch_coverage=1 00:16:14.875 --rc genhtml_function_coverage=1 00:16:14.875 --rc genhtml_legend=1 00:16:14.875 --rc geninfo_all_blocks=1 00:16:14.875 --rc geninfo_unexecuted_blocks=1 00:16:14.875 00:16:14.875 ' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:14.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.875 --rc genhtml_branch_coverage=1 00:16:14.875 --rc genhtml_function_coverage=1 00:16:14.875 --rc genhtml_legend=1 00:16:14.875 --rc geninfo_all_blocks=1 00:16:14.875 --rc geninfo_unexecuted_blocks=1 00:16:14.875 00:16:14.875 ' 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=99901 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
99901 00:16:14.875 21:48:37 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 99901 ']' 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:14.875 21:48:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:14.875 [2024-11-27 21:48:37.897903] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:14.875 [2024-11-27 21:48:37.898086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99901 ] 00:16:15.135 [2024-11-27 21:48:38.050524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.135 [2024-11-27 21:48:38.076459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:16:15.705 21:48:38 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.705 Malloc0 00:16:15.705 Malloc1 00:16:15.705 Malloc2 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.705 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.705 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.965 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.965 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:16:15.965 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:16:15.965 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:15.965 21:48:38 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.965 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a009e6d4-31ea-4bf2-aec9-eb54c7be4bed"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "a009e6d4-31ea-4bf2-aec9-eb54c7be4bed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a009e6d4-31ea-4bf2-aec9-eb54c7be4bed",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ac0b76d3-6ebb-4eaa-80a2-2ae61c78b7f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2a6782f0-46a9-43d1-a308-6ff3137f4617",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d13140cb-afb5-4eb9-bdb5-2ec7f4da8b3e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:16:15.966 21:48:38 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 99901 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 99901 ']' 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 99901 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99901 00:16:15.966 killing process with pid 99901 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99901' 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 99901 00:16:15.966 21:48:38 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 99901 00:16:16.535 21:48:39 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:16.535 21:48:39 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:16.535 21:48:39 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:16.535 21:48:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.535 21:48:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:16.535 ************************************ 00:16:16.535 START TEST bdev_hello_world 00:16:16.535 ************************************ 00:16:16.535 21:48:39 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:16:16.535 [2024-11-27 21:48:39.477258] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:16.535 [2024-11-27 21:48:39.477386] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99941 ] 00:16:16.535 [2024-11-27 21:48:39.633437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.797 [2024-11-27 21:48:39.660900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.797 [2024-11-27 21:48:39.840999] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:16:16.797 [2024-11-27 21:48:39.841044] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:16:16.797 [2024-11-27 21:48:39.841065] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:16:16.797 [2024-11-27 21:48:39.841347] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:16:16.797 [2024-11-27 21:48:39.841476] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:16:16.797 [2024-11-27 21:48:39.841495] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:16:16.797 [2024-11-27 21:48:39.841542] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:16:16.797 00:16:16.797 [2024-11-27 21:48:39.841558] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:16:17.057 00:16:17.057 real 0m0.675s 00:16:17.057 user 0m0.350s 00:16:17.057 sys 0m0.218s 00:16:17.057 21:48:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.057 ************************************ 00:16:17.057 END TEST bdev_hello_world 00:16:17.057 ************************************ 00:16:17.057 21:48:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:16:17.057 21:48:40 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:16:17.057 21:48:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:17.057 21:48:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.057 21:48:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:17.057 ************************************ 00:16:17.057 START TEST bdev_bounds 00:16:17.057 ************************************ 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=99972 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 99972' 00:16:17.057 Process bdevio pid: 99972 00:16:17.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 99972 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 99972 ']' 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.057 21:48:40 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:17.327 [2024-11-27 21:48:40.234478] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:17.327 [2024-11-27 21:48:40.234669] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99972 ] 00:16:17.327 [2024-11-27 21:48:40.391198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.327 [2024-11-27 21:48:40.420180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.327 [2024-11-27 21:48:40.420292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.327 [2024-11-27 21:48:40.420394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # 
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:18.320 I/O targets: 00:16:18.320 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:16:18.320 00:16:18.320 00:16:18.320 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.320 http://cunit.sourceforge.net/ 00:16:18.320 00:16:18.320 00:16:18.320 Suite: bdevio tests on: raid5f 00:16:18.320 Test: blockdev write read block ...passed 00:16:18.320 Test: blockdev write zeroes read block ...passed 00:16:18.320 Test: blockdev write zeroes read no split ...passed 00:16:18.320 Test: blockdev write zeroes read split ...passed 00:16:18.320 Test: blockdev write zeroes read split partial ...passed 00:16:18.320 Test: blockdev reset ...passed 00:16:18.320 Test: blockdev write read 8 blocks ...passed 00:16:18.320 Test: blockdev write read size > 128k ...passed 00:16:18.320 Test: blockdev write read invalid size ...passed 00:16:18.320 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.320 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.320 Test: blockdev write read max offset ...passed 00:16:18.320 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.320 Test: blockdev writev readv 8 blocks ...passed 00:16:18.320 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.320 Test: blockdev writev readv block ...passed 00:16:18.320 Test: blockdev writev readv size > 128k ...passed 00:16:18.320 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.320 Test: blockdev comparev and writev ...passed 00:16:18.320 Test: blockdev nvme passthru rw ...passed 00:16:18.320 Test: blockdev nvme passthru vendor specific ...passed 00:16:18.320 Test: blockdev nvme admin passthru ...passed 00:16:18.320 Test: blockdev copy ...passed 00:16:18.320 00:16:18.320 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.320 suites 1 1 n/a 0 0 00:16:18.320 tests 23 23 23 0 0 00:16:18.320 asserts 130 130 130 0 n/a 
00:16:18.320 00:16:18.320 Elapsed time = 0.316 seconds 00:16:18.320 0 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 99972 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 99972 ']' 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 99972 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.320 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99972 00:16:18.321 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.321 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.321 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99972' 00:16:18.321 killing process with pid 99972 00:16:18.321 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 99972 00:16:18.321 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 99972 00:16:18.580 21:48:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:16:18.580 00:16:18.580 real 0m1.459s 00:16:18.580 user 0m3.556s 00:16:18.580 sys 0m0.372s 00:16:18.580 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.580 21:48:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:16:18.580 ************************************ 00:16:18.580 END TEST bdev_bounds 00:16:18.580 ************************************ 00:16:18.580 21:48:41 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:18.580 
21:48:41 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:18.580 21:48:41 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.580 21:48:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:18.580 ************************************ 00:16:18.580 START TEST bdev_nbd 00:16:18.580 ************************************ 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- 
# local nbd_list 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=100015 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:16:18.580 21:48:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 100015 /var/tmp/spdk-nbd.sock 00:16:18.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 100015 ']' 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:18.841 21:48:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:18.841 [2024-11-27 21:48:41.789887] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:18.841 [2024-11-27 21:48:41.790136] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.841 [2024-11-27 21:48:41.948000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:19.101 [2024-11-27 21:48:41.975127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:19.671 21:48:42 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.931 1+0 records in 00:16:19.931 1+0 records out 00:16:19.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404957 s, 10.1 MB/s 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:16:19.931 21:48:42 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:20.190 { 00:16:20.190 "nbd_device": "/dev/nbd0", 00:16:20.190 "bdev_name": "raid5f" 00:16:20.190 } 00:16:20.190 ]' 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:20.190 { 00:16:20.190 "nbd_device": "/dev/nbd0", 00:16:20.190 "bdev_name": "raid5f" 00:16:20.190 } 00:16:20.190 ]' 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:20.190 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:20.191 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:20.191 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:20.191 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:20.191 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:20.450 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:16:20.710 /dev/nbd0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:20.710 21:48:43 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:20.710 1+0 records in 00:16:20.710 1+0 records out 00:16:20.710 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012685 s, 3.2 MB/s 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:20.710 21:48:43 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:20.970 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:20.970 { 00:16:20.970 "nbd_device": "/dev/nbd0", 00:16:20.970 "bdev_name": "raid5f" 00:16:20.970 } 00:16:20.970 ]' 00:16:20.970 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:20.970 { 00:16:20.970 "nbd_device": "/dev/nbd0", 00:16:20.970 "bdev_name": "raid5f" 00:16:20.970 } 00:16:20.970 ]' 00:16:20.970 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:20.970 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:21.229 256+0 records in 00:16:21.229 256+0 records out 00:16:21.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144853 s, 72.4 MB/s 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:21.229 256+0 records in 00:16:21.229 256+0 records out 00:16:21.229 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299313 s, 35.0 MB/s 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:21.229 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.487 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:21.746 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:22.006 malloc_lvol_verify 00:16:22.006 21:48:44 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:22.006 8d896f03-1f21-473b-8ed5-e027d225a27d 00:16:22.006 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:22.265 10c5d706-3368-4156-b283-91d43ffc3472 00:16:22.265 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:22.524 /dev/nbd0 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:16:22.524 mke2fs 1.47.0 (5-Feb-2023) 00:16:22.524 Discarding device blocks: 0/4096 done 00:16:22.524 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:22.524 00:16:22.524 Allocating group tables: 0/1 done 00:16:22.524 Writing inode tables: 0/1 done 00:16:22.524 Creating journal (1024 blocks): done 00:16:22.524 Writing superblocks and filesystem accounting information: 0/1 done 00:16:22.524 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.524 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 100015 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 100015 ']' 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 100015 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100015 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.784 killing process with pid 100015 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100015' 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 100015 00:16:22.784 21:48:45 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 100015 00:16:23.044 21:48:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:23.044 00:16:23.044 real 0m4.435s 00:16:23.044 user 0m6.385s 00:16:23.044 sys 0m1.371s 00:16:23.044 21:48:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.044 21:48:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:23.044 ************************************ 00:16:23.044 END TEST bdev_nbd 00:16:23.044 ************************************ 00:16:23.305 21:48:46 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:23.305 21:48:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:16:23.305 21:48:46 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:16:23.305 21:48:46 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:23.305 21:48:46 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.305 21:48:46 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.305 21:48:46 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:23.305 ************************************ 00:16:23.305 START TEST bdev_fio 00:16:23.305 ************************************ 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:23.305 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:23.305 ************************************ 00:16:23.305 START TEST bdev_fio_rw_verify 00:16:23.305 ************************************ 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:23.305 21:48:46 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:23.565 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:23.565 fio-3.35 00:16:23.565 Starting 1 thread 00:16:35.775 00:16:35.776 job_raid5f: (groupid=0, jobs=1): err= 0: pid=100206: Wed Nov 27 21:48:57 2024 00:16:35.776 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:16:35.776 slat (nsec): min=18010, max=82526, avg=19687.49, stdev=1785.94 00:16:35.776 clat (usec): min=10, max=314, avg=131.74, stdev=46.48 00:16:35.776 lat (usec): min=30, max=344, avg=151.42, stdev=46.71 00:16:35.776 clat percentiles (usec): 00:16:35.776 | 50.000th=[ 137], 99.000th=[ 219], 99.900th=[ 241], 99.990th=[ 273], 00:16:35.776 | 99.999th=[ 306] 00:16:35.776 write: IOPS=12.9k, BW=50.3MiB/s (52.8MB/s)(497MiB/9874msec); 0 zone resets 00:16:35.776 slat (usec): min=7, max=235, avg=16.10, stdev= 3.55 00:16:35.776 clat (usec): min=59, max=1236, avg=297.96, stdev=40.74 00:16:35.776 lat (usec): min=74, max=1472, avg=314.05, stdev=41.75 00:16:35.776 clat percentiles (usec): 00:16:35.776 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 603], 99.990th=[ 1090], 00:16:35.776 | 99.999th=[ 1172] 00:16:35.776 bw ( KiB/s): min=47976, max=54192, per=98.87%, avg=50960.53, stdev=1438.42, samples=19 00:16:35.776 iops : min=11994, max=13548, avg=12740.11, stdev=359.61, samples=19 00:16:35.776 lat (usec) : 20=0.01%, 50=0.01%, 
100=16.06%, 250=39.12%, 500=44.75% 00:16:35.776 lat (usec) : 750=0.04%, 1000=0.02% 00:16:35.776 lat (msec) : 2=0.01% 00:16:35.776 cpu : usr=98.93%, sys=0.43%, ctx=16, majf=0, minf=13157 00:16:35.776 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:35.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.776 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:35.776 issued rwts: total=123300,127238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:35.776 latency : target=0, window=0, percentile=100.00%, depth=8 00:16:35.776 00:16:35.776 Run status group 0 (all jobs): 00:16:35.776 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:16:35.776 WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=497MiB (521MB), run=9874-9874msec 00:16:35.776 ----------------------------------------------------- 00:16:35.776 Suppressions used: 00:16:35.776 count bytes template 00:16:35.776 1 7 /usr/src/fio/parse.c 00:16:35.776 121 11616 /usr/src/fio/iolog.c 00:16:35.776 1 8 libtcmalloc_minimal.so 00:16:35.776 1 904 libcrypto.so 00:16:35.776 ----------------------------------------------------- 00:16:35.776 00:16:35.776 00:16:35.776 real 0m11.234s 00:16:35.776 user 0m11.585s 00:16:35.776 sys 0m0.674s 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:16:35.776 ************************************ 00:16:35.776 END TEST bdev_fio_rw_verify 00:16:35.776 ************************************ 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "a009e6d4-31ea-4bf2-aec9-eb54c7be4bed"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"a009e6d4-31ea-4bf2-aec9-eb54c7be4bed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "a009e6d4-31ea-4bf2-aec9-eb54c7be4bed",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "ac0b76d3-6ebb-4eaa-80a2-2ae61c78b7f2",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "2a6782f0-46a9-43d1-a308-6ff3137f4617",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d13140cb-afb5-4eb9-bdb5-2ec7f4da8b3e",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:35.776 /home/vagrant/spdk_repo/spdk 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:16:35.776 00:16:35.776 real 0m11.525s 00:16:35.776 user 0m11.701s 00:16:35.776 sys 0m0.818s 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.776 21:48:57 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:35.776 ************************************ 00:16:35.776 END TEST bdev_fio 00:16:35.776 ************************************ 00:16:35.776 21:48:57 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:35.776 21:48:57 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:35.776 21:48:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:35.776 21:48:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.776 21:48:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:35.776 ************************************ 00:16:35.776 START TEST bdev_verify 00:16:35.776 ************************************ 00:16:35.776 21:48:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:35.776 [2024-11-27 21:48:57.883855] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 
00:16:35.776 [2024-11-27 21:48:57.883983] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100363 ] 00:16:35.776 [2024-11-27 21:48:58.039121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:35.776 [2024-11-27 21:48:58.068661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:35.776 [2024-11-27 21:48:58.068755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.776 Running I/O for 5 seconds... 00:16:37.278 10618.00 IOPS, 41.48 MiB/s [2024-11-27T21:49:01.334Z] 10725.00 IOPS, 41.89 MiB/s [2024-11-27T21:49:02.713Z] 10766.00 IOPS, 42.05 MiB/s [2024-11-27T21:49:03.281Z] 10762.25 IOPS, 42.04 MiB/s [2024-11-27T21:49:03.540Z] 10779.40 IOPS, 42.11 MiB/s 00:16:40.420 Latency(us) 00:16:40.420 [2024-11-27T21:49:03.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.420 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:40.420 Verification LBA range: start 0x0 length 0x2000 00:16:40.420 raid5f : 5.02 4280.59 16.72 0.00 0.00 44936.68 1180.51 31594.65 00:16:40.420 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:16:40.420 Verification LBA range: start 0x2000 length 0x2000 00:16:40.420 raid5f : 5.02 6508.21 25.42 0.00 0.00 29588.56 178.86 22322.31 00:16:40.420 [2024-11-27T21:49:03.541Z] =================================================================================================================== 00:16:40.420 [2024-11-27T21:49:03.541Z] Total : 10788.80 42.14 0.00 0.00 35677.30 178.86 31594.65 00:16:40.420 00:16:40.420 real 0m5.711s 00:16:40.420 user 0m10.673s 00:16:40.420 sys 0m0.219s 00:16:40.420 21:49:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.420 21:49:03 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:40.420 ************************************ 00:16:40.420 END TEST bdev_verify 00:16:40.420 ************************************ 00:16:40.679 21:49:03 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:40.679 21:49:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:40.679 21:49:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.679 21:49:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:40.679 ************************************ 00:16:40.679 START TEST bdev_verify_big_io 00:16:40.679 ************************************ 00:16:40.679 21:49:03 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:40.679 [2024-11-27 21:49:03.667355] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:40.679 [2024-11-27 21:49:03.667472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100443 ] 00:16:40.938 [2024-11-27 21:49:03.822999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:40.938 [2024-11-27 21:49:03.852822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.938 [2024-11-27 21:49:03.852946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.938 Running I/O for 5 seconds... 
00:16:43.253 633.00 IOPS, 39.56 MiB/s [2024-11-27T21:49:07.312Z] 760.00 IOPS, 47.50 MiB/s [2024-11-27T21:49:08.251Z] 761.33 IOPS, 47.58 MiB/s [2024-11-27T21:49:09.188Z] 777.00 IOPS, 48.56 MiB/s [2024-11-27T21:49:09.448Z] 786.60 IOPS, 49.16 MiB/s 00:16:46.327 Latency(us) 00:16:46.327 [2024-11-27T21:49:09.448Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.327 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:16:46.327 Verification LBA range: start 0x0 length 0x200 00:16:46.327 raid5f : 5.15 345.09 21.57 0.00 0.00 9160385.94 259.35 386462.07 00:16:46.327 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:16:46.327 Verification LBA range: start 0x200 length 0x200 00:16:46.327 raid5f : 5.18 452.92 28.31 0.00 0.00 7005012.44 165.45 305872.82 00:16:46.327 [2024-11-27T21:49:09.448Z] =================================================================================================================== 00:16:46.327 [2024-11-27T21:49:09.448Z] Total : 798.02 49.88 0.00 0.00 7933818.57 165.45 386462.07 00:16:46.586 00:16:46.586 real 0m5.876s 00:16:46.586 user 0m10.980s 00:16:46.586 sys 0m0.236s 00:16:46.586 21:49:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.586 21:49:09 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.586 ************************************ 00:16:46.586 END TEST bdev_verify_big_io 00:16:46.586 ************************************ 00:16:46.586 21:49:09 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.586 21:49:09 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:46.586 21:49:09 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.586 21:49:09 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:46.586 ************************************ 00:16:46.586 START TEST bdev_write_zeroes 00:16:46.586 ************************************ 00:16:46.586 21:49:09 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:46.586 [2024-11-27 21:49:09.621263] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:46.586 [2024-11-27 21:49:09.621384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100526 ] 00:16:46.846 [2024-11-27 21:49:09.780299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.846 [2024-11-27 21:49:09.808497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.105 Running I/O for 1 seconds... 
00:16:48.045 29511.00 IOPS, 115.28 MiB/s 00:16:48.045 Latency(us) 00:16:48.045 [2024-11-27T21:49:11.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:48.045 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:48.045 raid5f : 1.01 29491.62 115.20 0.00 0.00 4327.72 1323.60 5895.38 00:16:48.045 [2024-11-27T21:49:11.166Z] =================================================================================================================== 00:16:48.045 [2024-11-27T21:49:11.166Z] Total : 29491.62 115.20 0.00 0.00 4327.72 1323.60 5895.38 00:16:48.305 00:16:48.305 real 0m1.681s 00:16:48.305 user 0m1.365s 00:16:48.305 sys 0m0.204s 00:16:48.305 21:49:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.305 21:49:11 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:48.305 ************************************ 00:16:48.305 END TEST bdev_write_zeroes 00:16:48.305 ************************************ 00:16:48.305 21:49:11 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.305 21:49:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:48.305 21:49:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.305 21:49:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:48.305 ************************************ 00:16:48.305 START TEST bdev_json_nonenclosed 00:16:48.305 ************************************ 00:16:48.305 21:49:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.305 [2024-11-27 
21:49:11.379856] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:48.305 [2024-11-27 21:49:11.379980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100563 ] 00:16:48.566 [2024-11-27 21:49:11.537987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.566 [2024-11-27 21:49:11.562704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.566 [2024-11-27 21:49:11.562803] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:48.566 [2024-11-27 21:49:11.562830] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:48.566 [2024-11-27 21:49:11.562842] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:48.566 00:16:48.566 real 0m0.353s 00:16:48.566 user 0m0.138s 00:16:48.566 sys 0m0.111s 00:16:48.566 21:49:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.566 21:49:11 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:48.566 ************************************ 00:16:48.566 END TEST bdev_json_nonenclosed 00:16:48.566 ************************************ 00:16:48.826 21:49:11 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.826 21:49:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:48.826 21:49:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.826 21:49:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:48.826 
************************************ 00:16:48.826 START TEST bdev_json_nonarray 00:16:48.826 ************************************ 00:16:48.826 21:49:11 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:48.826 [2024-11-27 21:49:11.804666] Starting SPDK v25.01-pre git sha1 35cd3e84d / DPDK 22.11.4 initialization... 00:16:48.826 [2024-11-27 21:49:11.804782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100588 ] 00:16:49.113 [2024-11-27 21:49:11.962370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.113 [2024-11-27 21:49:11.993424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.113 [2024-11-27 21:49:11.993535] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:16:49.113 [2024-11-27 21:49:11.993554] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:49.113 [2024-11-27 21:49:11.993568] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:49.113 00:16:49.113 real 0m0.362s 00:16:49.113 user 0m0.144s 00:16:49.113 sys 0m0.114s 00:16:49.113 21:49:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.113 21:49:12 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 ************************************ 00:16:49.113 END TEST bdev_json_nonarray 00:16:49.113 ************************************ 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:16:49.113 21:49:12 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:16:49.113 00:16:49.113 real 0m34.598s 00:16:49.113 user 0m47.232s 00:16:49.113 sys 0m4.742s 00:16:49.113 21:49:12 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:49.113 21:49:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:16:49.113 
************************************ 00:16:49.113 END TEST blockdev_raid5f 00:16:49.113 ************************************ 00:16:49.418 21:49:12 -- spdk/autotest.sh@194 -- # uname -s 00:16:49.418 21:49:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:49.418 21:49:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:49.418 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:16:49.418 21:49:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:16:49.418 21:49:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:16:49.418 21:49:12 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:16:49.418 21:49:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:16:49.418 21:49:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:49.418 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:16:49.418 21:49:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:16:49.418 21:49:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:16:49.418 21:49:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:16:49.418 21:49:12 -- common/autotest_common.sh@10 -- # set +x 00:16:51.971 INFO: APP EXITING 00:16:51.971 INFO: killing all VMs 00:16:51.971 INFO: killing vhost app 00:16:51.971 INFO: EXIT DONE 00:16:52.231 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.231 Waiting for block devices as requested 00:16:52.231 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:52.231 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:53.170 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:53.430 Cleaning 00:16:53.430 Removing: /var/run/dpdk/spdk0/config 00:16:53.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:16:53.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:16:53.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:16:53.430 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:16:53.430 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:16:53.430 Removing: /var/run/dpdk/spdk0/hugepage_info 00:16:53.430 Removing: /dev/shm/spdk_tgt_trace.pid68884 00:16:53.430 Removing: /var/run/dpdk/spdk0 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100197 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100363 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100443 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100526 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100563 00:16:53.430 Removing: /var/run/dpdk/spdk_pid100588 00:16:53.430 Removing: 
/var/run/dpdk/spdk_pid68720 00:16:53.430 Removing: /var/run/dpdk/spdk_pid68884 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69091 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69173 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69201 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69313 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69331 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69519 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69587 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69672 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69772 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69847 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69892 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69923 00:16:53.430 Removing: /var/run/dpdk/spdk_pid69998 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70105 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70531 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70579 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70631 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70647 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70707 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70723 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70781 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70797 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70850 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70868 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70910 00:16:53.430 Removing: /var/run/dpdk/spdk_pid70928 00:16:53.430 Removing: /var/run/dpdk/spdk_pid71055 00:16:53.430 Removing: /var/run/dpdk/spdk_pid71092 00:16:53.430 Removing: /var/run/dpdk/spdk_pid71175 00:16:53.430 Removing: /var/run/dpdk/spdk_pid72340 00:16:53.430 Removing: /var/run/dpdk/spdk_pid72541 00:16:53.430 Removing: /var/run/dpdk/spdk_pid72670 00:16:53.430 Removing: /var/run/dpdk/spdk_pid73269 00:16:53.690 Removing: /var/run/dpdk/spdk_pid73470 00:16:53.690 Removing: /var/run/dpdk/spdk_pid73599 00:16:53.690 Removing: /var/run/dpdk/spdk_pid74209 00:16:53.690 Removing: /var/run/dpdk/spdk_pid74517 00:16:53.690 Removing: 
/var/run/dpdk/spdk_pid74657 00:16:53.690 Removing: /var/run/dpdk/spdk_pid75981 00:16:53.690 Removing: /var/run/dpdk/spdk_pid76222 00:16:53.690 Removing: /var/run/dpdk/spdk_pid76352 00:16:53.690 Removing: /var/run/dpdk/spdk_pid77688 00:16:53.690 Removing: /var/run/dpdk/spdk_pid77930 00:16:53.690 Removing: /var/run/dpdk/spdk_pid78059 00:16:53.690 Removing: /var/run/dpdk/spdk_pid79389 00:16:53.690 Removing: /var/run/dpdk/spdk_pid79818 00:16:53.690 Removing: /var/run/dpdk/spdk_pid79947 00:16:53.690 Removing: /var/run/dpdk/spdk_pid81377 00:16:53.690 Removing: /var/run/dpdk/spdk_pid81625 00:16:53.690 Removing: /var/run/dpdk/spdk_pid81754 00:16:53.690 Removing: /var/run/dpdk/spdk_pid83184 00:16:53.690 Removing: /var/run/dpdk/spdk_pid83432 00:16:53.690 Removing: /var/run/dpdk/spdk_pid83561 00:16:53.690 Removing: /var/run/dpdk/spdk_pid84990 00:16:53.690 Removing: /var/run/dpdk/spdk_pid85463 00:16:53.690 Removing: /var/run/dpdk/spdk_pid85598 00:16:53.690 Removing: /var/run/dpdk/spdk_pid85725 00:16:53.690 Removing: /var/run/dpdk/spdk_pid86125 00:16:53.690 Removing: /var/run/dpdk/spdk_pid86837 00:16:53.690 Removing: /var/run/dpdk/spdk_pid87194 00:16:53.690 Removing: /var/run/dpdk/spdk_pid87869 00:16:53.690 Removing: /var/run/dpdk/spdk_pid88299 00:16:53.690 Removing: /var/run/dpdk/spdk_pid89032 00:16:53.690 Removing: /var/run/dpdk/spdk_pid89431 00:16:53.690 Removing: /var/run/dpdk/spdk_pid91337 00:16:53.690 Removing: /var/run/dpdk/spdk_pid91764 00:16:53.690 Removing: /var/run/dpdk/spdk_pid92187 00:16:53.690 Removing: /var/run/dpdk/spdk_pid94210 00:16:53.690 Removing: /var/run/dpdk/spdk_pid94679 00:16:53.690 Removing: /var/run/dpdk/spdk_pid95184 00:16:53.690 Removing: /var/run/dpdk/spdk_pid96212 00:16:53.690 Removing: /var/run/dpdk/spdk_pid96518 00:16:53.690 Removing: /var/run/dpdk/spdk_pid97433 00:16:53.690 Removing: /var/run/dpdk/spdk_pid97750 00:16:53.690 Removing: /var/run/dpdk/spdk_pid98667 00:16:53.690 Removing: /var/run/dpdk/spdk_pid98979 00:16:53.690 Removing: 
/var/run/dpdk/spdk_pid99649 00:16:53.690 Removing: /var/run/dpdk/spdk_pid99901 00:16:53.690 Removing: /var/run/dpdk/spdk_pid99941 00:16:53.690 Removing: /var/run/dpdk/spdk_pid99972 00:16:53.690 Clean 00:16:53.950 21:49:16 -- common/autotest_common.sh@1453 -- # return 0 00:16:53.950 21:49:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:16:53.950 21:49:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.950 21:49:16 -- common/autotest_common.sh@10 -- # set +x 00:16:53.950 21:49:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:16:53.950 21:49:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:53.950 21:49:16 -- common/autotest_common.sh@10 -- # set +x 00:16:53.950 21:49:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:16:53.950 21:49:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:16:53.950 21:49:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:16:53.950 21:49:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:16:53.950 21:49:16 -- spdk/autotest.sh@398 -- # hostname 00:16:53.950 21:49:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:16:54.210 geninfo: WARNING: invalid characters removed from testname! 
00:17:20.783 21:49:40 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:20.783 21:49:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:22.692 21:49:45 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:25.230 21:49:47 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:27.139 21:49:49 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:29.047 21:49:52 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:17:31.591 21:49:54 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:17:31.591 21:49:54 -- spdk/autorun.sh@1 -- $ timing_finish 00:17:31.591 21:49:54 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:17:31.591 21:49:54 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:17:31.591 21:49:54 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:17:31.591 21:49:54 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:17:31.591 + [[ -n 6155 ]] 00:17:31.591 + sudo kill 6155 00:17:31.602 [Pipeline] } 00:17:31.618 [Pipeline] // timeout 00:17:31.624 [Pipeline] } 00:17:31.638 [Pipeline] // stage 00:17:31.644 [Pipeline] } 00:17:31.658 [Pipeline] // catchError 00:17:31.668 [Pipeline] stage 00:17:31.670 [Pipeline] { (Stop VM) 00:17:31.682 [Pipeline] sh 00:17:31.966 + vagrant halt 00:17:34.516 ==> default: Halting domain... 00:17:42.723 [Pipeline] sh 00:17:43.006 + vagrant destroy -f 00:17:45.545 ==> default: Removing domain... 
00:17:45.557 [Pipeline] sh 00:17:45.839 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:17:45.849 [Pipeline] } 00:17:45.864 [Pipeline] // stage 00:17:45.869 [Pipeline] } 00:17:45.883 [Pipeline] // dir 00:17:45.889 [Pipeline] } 00:17:45.903 [Pipeline] // wrap 00:17:45.909 [Pipeline] } 00:17:45.922 [Pipeline] // catchError 00:17:45.931 [Pipeline] stage 00:17:45.933 [Pipeline] { (Epilogue) 00:17:45.945 [Pipeline] sh 00:17:46.230 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:17:50.437 [Pipeline] catchError 00:17:50.439 [Pipeline] { 00:17:50.450 [Pipeline] sh 00:17:50.733 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:17:50.733 Artifacts sizes are good 00:17:50.742 [Pipeline] } 00:17:50.755 [Pipeline] // catchError 00:17:50.765 [Pipeline] archiveArtifacts 00:17:50.772 Archiving artifacts 00:17:50.872 [Pipeline] cleanWs 00:17:50.885 [WS-CLEANUP] Deleting project workspace... 00:17:50.885 [WS-CLEANUP] Deferred wipeout is used... 00:17:50.893 [WS-CLEANUP] done 00:17:50.895 [Pipeline] } 00:17:50.911 [Pipeline] // stage 00:17:50.916 [Pipeline] } 00:17:50.930 [Pipeline] // node 00:17:50.936 [Pipeline] End of Pipeline 00:17:50.976 Finished: SUCCESS